Creators of a new AI fake text generator say it may be too dangerous

Many people fear the use, and misuse, of artificial intelligence (AI): for every good application of the technology there is potential for a harmful one. Now that fear has reached the creators of an AI tool themselves. The makers of a system described as “deepfakes for text” have declined to release it publicly, fearing it would be widely misused. The tool is built so that it can write news stories, as well as works of fiction, on its own.

The tool was developed by OpenAI, a nonprofit research company that Elon Musk has backed. The company says its AI model, GPT-2, is so good that misuse of it is highly likely. It is therefore breaking with its usual practice of publishing its research in full while it assesses the ramifications of the breakthrough.

GPT-2 is, at its core, a text generator. It is fed a piece of text, anywhere from a single sentence to an entire page, and is then asked to write what comes next based on its prediction of how the text should continue. This is similar to the autocomplete feature recently introduced in Gmail on the web.
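To make the prompt-then-continuation idea concrete, here is a deliberately simplified sketch. GPT-2 itself is a large transformer neural network; the toy model below is only a word-level Markov chain (each word predicts the next from observed pairs), which is not OpenAI's method but illustrates the same loop of repeatedly predicting the next token. The corpus, function names, and parameters are all invented for this example.

```python
import random

def build_bigram_model(corpus):
    """Map each word to the list of words observed directly after it."""
    words = corpus.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def continue_text(model, prompt, length=10, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation for the last word
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model reads the prompt and the model writes the next word"
model = build_bigram_model(corpus)
print(continue_text(model, "the model", length=5))
```

A real system like GPT-2 replaces the bigram lookup with a neural network that conditions on the entire preceding context, which is why its continuations stay coherent over whole paragraphs rather than just word to word.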

GPT-2’s fluency has led OpenAI to consider what malicious users might do with it. With a few modest tweaks, the company produced a version of GPT-2 that can generate an endless stream of positive or negative product reviews; in the wrong hands, similar variants could be used to spread false information widely. OpenAI is weighing these possibilities and will decide how to proceed.