Once a strange and unrefined concept, Generative AI has become a part of everyday life. Even developers now use these programs to write and debug code (despite companies warning against it). However, the ChatGPT breach earlier this year raised questions about the security of these solutions, and about whether zero-knowledge blockchains can improve Generative AI tools.
Before ascertaining whether blockchain can improve these content-generation tools, let’s examine how they work.
What are Generative AI Tools?
Generative AI tools create new content like text, images, music, or code based on patterns learned from data. The term “generative” comes from their ability to generate new, original content rather than simply analyzing or predicting data.
For a better understanding, see generative AI tools as “chatbots on steroids”.
With your regular chatbot, the responses are limited, as they can only reply based on the finite information they have. This is why you get funny replies or error messages if you do not use specific keywords when communicating with a chatbot. Generative AI tools are a different breed, built on machine learning principles.
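To make the contrast concrete, here is a toy sketch (purely hypothetical, not any real chatbot product) of how a regular keyword-based chatbot works, and why it fails the moment a message lacks the expected keywords:

```python
# Hypothetical rule-based bot: it can only answer when a known
# keyword appears somewhere in the user's message.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(message: str) -> str:
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    # No keyword matched: the bot has nothing useful to say.
    return "Sorry, I don't understand. Try asking about 'hours' or 'refund'."

print(reply("What are your hours?"))  # matched keyword -> canned answer
print(reply("When can I visit you?"))  # same intent, no keyword -> error reply
```

The second message means the same thing as the first, but because no keyword matches, the bot falls back to an error message. Generative AI tools avoid this trap by learning patterns rather than matching fixed keywords.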
Developers design Generative AI to look for the most important parts of its inputs, identify patterns, learn, and improve its responses over time. A good example is training an artificial intelligence tool with pictures of a moose from different angles and views. With time, the AI will identify the most essential features of a moose and use them to generate responses.
Furthermore, these AI tools continue absorbing new information as they process more content requests. Using the moose example, suppose someone shows the tool a different picture of a moose. The program will use the most critical features, such as antlers, eyes, and hooves, to identify it.
Meanwhile, it will absorb any possible new features, as captured in different pictures of a moose. ChatGPT and similar tools will then add this new information to their database. In turn, the new details serve as a reference source when giving similar responses in the future. The process is almost like artificial intelligence teaching itself to be smarter, and it explains why these tools can respond to more complex prompts.
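The moose example can be illustrated with a deliberately simplified sketch. Real generative models learn features through neural network training, not by counting, but this toy (all names and features are invented for illustration) captures the idea of distilling what recurs across examples while ignoring what does not:

```python
# Toy illustration: "learning" which features matter by counting how
# often each appears across labeled pictures of a moose. Features that
# recur in most pictures are treated as essential; one-off background
# details are not.
from collections import Counter

class FeatureLearner:
    def __init__(self):
        self.counts = Counter()
        self.pictures_seen = 0

    def train(self, features):
        """Absorb one picture, described as a set of observed features."""
        self.counts.update(features)
        self.pictures_seen += 1

    def essential_features(self, threshold=0.8):
        """Return features present in most pictures seen so far."""
        return {f for f, c in self.counts.items()
                if c / self.pictures_seen >= threshold}

learner = FeatureLearner()
learner.train({"antlers", "eyes", "hooves", "snow"})
learner.train({"antlers", "eyes", "hooves", "grass"})
learner.train({"antlers", "eyes", "hooves"})
print(learner.essential_features())  # antlers, eyes, hooves recur; backgrounds don't
```

Each new picture updates the counts, mirroring how the tools described above keep absorbing new information and refining what they treat as a moose's defining features.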
ChatGPT Hack: What Went Wrong?
OpenAI, the company behind ChatGPT, uses Redis, an open-source in-memory data store, to cache user information so the service can respond quickly. Because Redis and its client libraries are open source, third-party developers worldwide can contribute to and audit the code.
However, a bug in the open-source Redis client library caused ChatGPT to leak private user information. Due to this flaw, some users could see titles from other users' chat histories, and a small number of ChatGPT Plus subscribers had payment-related details exposed.
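A simplified sketch can show how this class of bug leaks one user's data to another. This is a hypothetical illustration of the general failure mode, not the actual Redis client code: when a request is canceled before its reply is read, a stale reply can be left on a shared pooled connection, and the next request reads it by mistake:

```python
# Hypothetical model of a shared (pooled) connection. Replies queue up
# in order; a reply that is never consumed gets delivered to whoever
# reads the connection next.
import queue

class SharedConnection:
    def __init__(self):
        self.replies = queue.Queue()

    def send(self, request):
        # The server pushes a reply for every request it receives.
        self.replies.put(f"data for {request}")

    def read_reply(self):
        return self.replies.get()

conn = SharedConnection()

# User A's request goes out, but A cancels before reading the reply,
# leaving it queued on the shared connection.
conn.send("user_a_history")

# User B reuses the same pooled connection for their own request.
conn.send("user_b_history")
leaked = conn.read_reply()
print(leaked)  # -> "data for user_a_history": B receives A's data
```

The fix for bugs like this is to discard or reset a connection whose request/reply bookkeeping may be out of sync, rather than returning it to the pool.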
In response, OpenAI fixed the bug, improved the robustness of its Redis setup, and launched a bug bounty program. Developers can earn between $200 and $20,000 for helping the artificial intelligence company identify loopholes in its programs.
According to Twingate, this update will reduce the likelihood of errors at extreme loads. While reports did not specifically say so, a system overload may have partially contributed to this issue. It is not uncommon for systems to malfunction when dealing with requests beyond their optimal capacity.