
Monday, 15 May 2023

We need government regulation and a ban on AutoGPT

I have witnessed firsthand the potential of AI and large language models (LLMs) like ChatGPT to transform industries, software and productivity, and to improve lives. However, as with any groundbreaking technology, there is a critical need to strike a balance between innovation and responsible use, and I think we are already missing that balance. If alarm bells aren't ringing, then all I can tell you is that either people haven't understood what is really happening or they are too invested in it.

At Enterprise Bot, we have successfully integrated LLMs like ChatGPT into our systems to provide better and more efficient services to our clients. So don't get me wrong, we are highly invested in this technology ourselves. That does not mean we do not have a larger social responsibility.

Not all applications of AI are created equal. For example, information search, answering questions, or even retrieving data from different systems is not a risky endeavor. But with the emergence of AutoGPT and ChaosGPT, it has become clear that we need to draw a line between responsible and irresponsible use of AI.

If you haven't heard of AutoGPT and ChaosGPT, these are examples of ChatGPT connected to Python code execution. You tell the system what you want it to do, and it will try to write and run code by itself to achieve those goals. This might sound like a great thing, and even a new advancement, but if you allow AI to generate and execute code and move from READ-only to WRITE actions without any boundaries (basically going from an information device to something that actually executes actions), then we are entering the domain of every negative science-fiction movie you have ever seen.
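To make that concrete, here is a deliberately simplified, hypothetical sketch of the kind of loop these tools run. This is not actual AutoGPT code, and ask_llm() is just a placeholder standing in for a real LLM API call; the point is that whatever text the model produces is handed straight to exec(), with nothing in between checking what it does.

# Hypothetical sketch of an AutoGPT-style loop (not real AutoGPT code).
# ask_llm() is a placeholder for a real LLM call; whatever text the model
# returns is passed straight to exec().

def ask_llm(goal, history):
    # Placeholder: a real implementation would call an LLM API here.
    return "print('model-generated code for goal: " + goal + "')"

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        code = ask_llm(goal, history)  # the model writes code...
        exec(code)                     # ...and the tool runs it, with no boundaries
        history.append(code)           # results feed the next step

run_agent("organise my files")  # any goal at all; nothing checks what the code does

Even in this toy version, the only thing standing between a text generator and arbitrary actions on the machine is that exec() call.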

Now you might think this is fear mongering, or you might wonder why a person who has a vested interest in seeing LLMs succeed is talking about worst-case scenarios. The simple fact is that we already know ChatGPT-type technologies can be used to create biological weapons (an AI suggested 40,000 possible new chemical weapons in just six hours) and that ChatGPT can be used to create malware. There are many other examples where, even without AI, we created havoc, e.g. the infamous "flash crashes" in the stock market, where algorithms triggered rapid and unpredictable fluctuations in stock prices, crashing the entire market in a couple of seconds. If we don't want a similar outcome for our energy grids, our IT infrastructure, and pretty much every other tool connected to the internet, we need to assess what we are doing.

I want to make it clear that I am not advocating for halting AI progress, nor do I support the calls made by Elon Musk and others to slow down the development of AI in general. Instead, I believe it is crucial that we focus on finding ways to harness the incredible power of AI while mitigating the risks it poses. One key step is regulating, or even blocking, all code-generation activities in the public domain that have the ability to write and execute functions. The risks far outweigh the benefits, and it is a calamity waiting to happen. This would still allow companies to use code-generation technology, but it would block bad actors, or actors without enough knowledge, from using it irresponsibly. We as a society already do this for weapons and medicine, where we restrict who can get access to them and how; if we do not recognise that AI today has the same, if not a higher, impact, we will be too late.

It is surprising that the EU regulators, who have generally been a lot more responsible about data and the risks of technology, have not acted, and that regulators in the rest of the world are not even really looking at acting on these major points. (I am not counting the blanket ban by Italy on all of ChatGPT, which was a bit naïve in my opinion, as it only brought us the fun PizzaGPT.)

It is surprising that organizations like OpenAI, which advocate for responsible AI development, are allowing the same risks to persist. It is my hope that the AI community, including developers, researchers, and companies like ours, can come together to establish best practices and create a safer AI landscape for all.

AI is an incredible tool with the potential to transform our world in positive ways. However, we must also recognize the potential risks that come with AI executing code and work together to create regulations that promote responsible use. At Enterprise Bot, we are committed to using AI ethically and responsibly, and we urge others in the industry to join this mission to ensure a secure and beneficial AI future for everyone.
