This post summarizes some Artificial Intelligence (AI) generative chatbots and the security concerns around them.
List of Artificial Intelligence Chatbots
List of Artificial Intelligence Chatbots:
- OpenAI ChatGPT
- Google Bard
- Microsoft Bing AI
- Meta LLaMA
OpenAI ChatGPT
ChatGPT, developed by the American company OpenAI, is open to the general public worldwide.
Google Bard
Bard, developed by the American company Google, is only available in some countries as of 2023.
Microsoft Bing AI
Bing AI is Microsoft’s chatbot. It is open only to testers.
Meta LLaMA
Language Model Meta AI (LLaMA) is still a research project not open to the public.
AI Chatbot Security Concerns
Summary of AI Chatbot Security Concerns:
- Upload sensitive information
- Data Protection
- Children Protection
- Malicious use of the application
- Misinformation
Upload Sensitive Information
Users or employees may upload sensitive information to the website.
Depending on the chatbot's terms and conditions, this information could be visible to the tool's support team.
For example, when using ChatGPT's API, the conversations with the chatbot are not visible to OpenAI's support team and are not used to train the company's models.
This does not apply to the general public version: user input is visible to the support team and may be used to train ChatGPT.
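To make the API distinction concrete, the sketch below builds (but does not send) a request for OpenAI's chat completions endpoint. The endpoint URL and payload shape follow OpenAI's public API documentation as of 2023; the API key is a placeholder, and the data-usage note in the docstring summarizes OpenAI's stated policy at that time.

```python
import json

# Endpoint and model name follow OpenAI's public API documentation (2023);
# the API key below is a placeholder, not a real credential.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."  # placeholder

def build_chat_request(user_message):
    """Build the HTTP headers and JSON body for a chat completion request.

    Per OpenAI's API data-usage policy (as of 2023), input sent through
    the API is not used to train models, unlike input typed into the
    consumer ChatGPT website.
    """
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, json.dumps(body)

headers, body = build_chat_request("Summarize our quarterly report.")
print(json.loads(body)["messages"][0]["role"])  # → user
```

Note that this is only a sketch of how a request is assembled; whether uploaded text is retained or used for training is governed by the provider's policy, not by anything in the request itself.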
Data Protection
Data protection was among the reasons why ChatGPT was banned in Italy in 2023. OpenAI added a form to comply with the European Union's GDPR before ChatGPT was readmitted in the country.
Children Protection
Children protection was among the reasons why ChatGPT was banned in Italy in 2023. OpenAI added an age verification step before ChatGPT was readmitted in the country.
Risks related to Internet Exposure
Chatbots may have security issues that compromise privacy. For example, a ChatGPT bug temporarily exposed AI chat histories to other users, as can be read on this external link.
This risk is also shared with any cloud tool that is exposed to the internet, like social networks, online banks, etc.
Malicious use of the Application
This aspect is not exclusive to ChatGPT. In fact, any tool that could be used for malicious intents (e.g., an e-mail account) presents a risk.
The main concern is that a tool as powerful as ChatGPT brings both potential benefits and potential for misuse.
https://hbr.org/2023/04/the-new-risks-chatgpt-poses-to-cybersecurity
Misinformation
Users could be misinformed by chatbots in many aspects: technical, political, ethical, etc. This could be deliberate or due to errors in the chatbot.
Note that this risk also exists with other sources of information, like media outlets, newspapers, social networks, books, etc.
Organizations that have restricted the use of Chatbots
Organizations that have restricted the use of chatbots:
- Countries
  - Italy
- Software
  - Samsung
- Banking
  - JPMorgan
  - Bank of America
  - Citigroup
  - Deutsche Bank
  - Goldman Sachs
  - Wells Fargo
An article about chatbot use restriction in Italy can be read on this external link.
An article about chatbot use restriction at Samsung can be read on this external link.
An article about chatbot use restriction at JPMorgan can be read on this external link.
An article about chatbot use restrictions at banks can be read on this external link.