By Editor Morten B. Reitoft
With ChatGPT, you can ask almost anything and get back very complex, and for the most part accurate, information, written in easy-to-understand English.
OpenAI has taken AI to the next level, and you will see, or have already seen, endless examples of how ChatGPT can answer complex questions. As the model learns, it will improve, and though OpenAI's disclaimer is clear about the limitations of its services, people have already started using all of them. The quality of the conversation is so good that an artificial 'person' spent 'his' Christmas on Twitter discussing politics without ANYBODY realizing they were chatting with a robot.
OpenAI's CEO Sam Altman said in a video that he believes there will be a limited number of very comprehensive and massive AI providers, as the technology requires a lot of computing power, as well as monitoring, further development, etc.
He also said that the strongest AIs would be industry-based solutions that teach the AI the details of a given industry - but this is already happening on top of the big AI providers.
Solutions like Descript and Runway use OpenAI in their products. The online video editor Runway enables users to add background details that don't exist but are generated on demand. It also allows you to remove elements from an entire film, and the AI will recognize the element throughout it.
What we see now is only the tip of the iceberg. AI will 100% give humanity a lot of great things - though some people will see their work disappear. I couldn't help smiling when The Economic Times, on July 5th, 2022, published an article in which the author, Satyam Sharma, argues that AI can't replace the following jobs: "Psychologists, caregivers, most engineers, human resource managers, marketing strategists, and lawyers are some roles that cannot be replaced by AI anytime in the near future."
I wouldn't bet on that, but regardless, AI will give us new work, new challenges, and new solutions.
In China, the government has recently passed a law requiring artificially generated content to be labeled as such. The problem is most likely not the labeled content but the content that people with bad intentions create artificially - they will not label it. How we will differentiate AI-generated content from human-made content in the future is a big question, so why not ask ChatGPT?
I asked ChatGPT: "How will humans differentiate content created by humans or by AI in the future?"
And here is the answer: "There are a few ways that humans might differentiate content created by humans or by AI in the future:
AI is here to stay, so the real question is how we can use it for the better. Interesting times!