AI sensation ChatGPT has put itself in the spotlight thanks to its remarkable abilities, but tall trees catch a lot of wind. Schools are not happy with it, writers are concerned, and now another group has joined in. Major US news media are demanding compensation from OpenAI, its creator, according to Bloomberg: their articles were allegedly used to train the AI.
Training artificial intelligence
Although artificial intelligence is intelligent in itself (after all, that is how it is developed), it does require a certain base of knowledge. It acquires that by processing large amounts of data, teaching itself which patterns occur in texts or other types of data. After all, how can you expect an AI to recognize a flower in a photo if it has never learned to link that flower to its name? So you have to feed such an AI lots of pictures of flowers with the flower's name attached, so that it learns which flower bears which name.
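The labeling idea described above can be sketched as a toy supervised classifier. This is only an illustration under invented assumptions: instead of real photos, each "flower" is a made-up pair of numbers (petal length and width), and the "model" is simply the average of each flower's examples:

```python
# Toy sketch of supervised learning: pair each example with a label,
# let the model pick up the pattern, then ask it about unseen data.
# The "flowers" are made-up (petal length, petal width) pairs, not photos.

def train(labeled_examples):
    """Compute the average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new example by whichever centroid it lies closest to."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Labeled training data: (petal length, petal width) -> flower name
data = [
    ([1.4, 0.2], "daisy"), ([1.5, 0.3], "daisy"),
    ([5.1, 1.8], "sunflower"), ([4.9, 2.0], "sunflower"),
]
model = train(data)
print(predict(model, [1.3, 0.25]))  # -> daisy
print(predict(model, [5.0, 1.9]))   # -> sunflower
```

The same principle, scaled up to billions of examples and far more elaborate models, is what gives systems like ChatGPT their "basic knowledge".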
Logical, but is it fair that AI is trained on articles by journalists? Various American publishers think not. Dow Jones, the company behind The Wall Street Journal, wants money from OpenAI. "Anyone who uses the work of Wall Street Journal editors to train AI systems must purchase a license to do so. OpenAI has not yet approached us about this," said a Dow Jones consultant.
ChatGPT trained with articles
OpenAI has not yet commented on the news, but it has previously indicated that it does indeed use articles from various English-language media, such as CNN, the BBC, The New York Times, The Guardian, and The Washington Post. Many news outlets strongly disagree with that. CNN has already weighed in: it violates its terms of use. The issue is also a sensitive one: ChatGPT has the potential to take over the work of writing professions. It therefore feels all the more wrong that it is precisely the articles of those writers that are used to train this 'enemy'.
On the other hand, things won't move that fast: ChatGPT's texts still contain plenty of sloppy mistakes and inaccuracies. Not only are they regularly wrong; sometimes they spread outright propaganda. As impressive as it is that all this is possible, and that genuinely valuable information often does come out, the question is whether the risk of incorrect information is not too great to make such powerful software available worldwide. For a while, the website CNET relied on ChatGPT for all its articles; it quickly reversed that decision because the information was wrong. For now, then, we should see it as an interesting experiment, but of course there are people who truly believe in it and who may make different life choices based on factually incorrect information.
The threat of AI
And indeed, professional groups that feel threatened by the AI wave are speaking out. Several lawsuits are pending against DALL-E, for example, which creates works of art based on a few typed words. That AI must also be trained somehow, and although Vincent van Gogh can no longer complain about it himself, other artists certainly can. And they do. ChatGPT had 100 million active users in January, and the tool has only been out for a few months (!). It reaches a huge number of people, and that is precisely why its inaccuracy worries people.
It's certainly not just ChatGPT that has these problems. Microsoft's Bing and Google's Bard are also still in their infancy. Google even saw its share price plummet when Bard made a mistake in one of its first public demonstrations. Bing also makes mistakes, and it doesn't always show its best side in terms of 'personality' either: Bing AI reportedly gets angry with users quite often. Microsoft doesn't want its robot to get overworked, so it has limited the number of conversations a user can have with the tech to fifty sessions per day (with a maximum of five questions per session).
Laws and regulations
It's great to see the influence technology can have, but many people are still scratching their heads: are we doing the right thing? What about laws and regulations? What if all that AI turns against us? Given the fierce competition, the question is whether the ethical dilemmas surrounding these tools receive enough attention. Google, for example, reportedly announced Bard far too soon, but that probably had everything to do with some media calling ChatGPT the new Google. Media that, in turn, are now going after ChatGPT. 2023 promises to be another interesting year. If you don't read about it here next year, perhaps you will on ChatGPT.
Laura Jenny
When she's not typing, she's traveling around the wonderful world of entertainment or some cool place in the real world. Mario is the man of her life,…