In this series we’ve discussed platforms such as Google, OpenAI, Cohere, and Microsoft bringing AI to the enterprise, and the information-leakage risks of implementing AI in a work environment. In this final article, we want to highlight some of the macro concerns raised in academia and the media about AI and our possible lack of control over these technologies.

First, have you seen the New York Times article about the lawyer who used ChatGPT to write his brief? The references and relevant court decisions cited in the brief were false, fabricated by the AI. This is a warning to anyone tempted to use this amazing technology to research, justify, or otherwise establish facts in any setting: you still have to do the hard work of confirming that what you are saying is true and verifying your sources. Many people are using ChatGPT as a search engine, but, as with everything on the internet, you need to be aware that not everything ChatGPT tells you is true. Because large language models like ChatGPT are trained on the internet, they are also trained on all the lies on the internet.

Much is also being made of the danger of a Terminator-style Skynet AI becoming self-aware and deciding humankind is a parasite to be eliminated. There is a short and interesting discussion on some of the larger issues around AI today between Geoffrey Hinton, noted for his work on artificial neural networks and formerly part of Google Brain (he departed in May 2023, citing concerns about the risks of AI), and Andrew Ng, an industry leader in AI, founder of several AI enterprises, and Stanford University professor (and Hybridge client). They discuss the lack of a cohesive, comprehensive point of view from the scientific community, and whether AI “understands” what it is recommending, and what it even means to understand. These two points are critical because in the coming months and years, everyone from politicians to business leaders will be looking for guidance and “expert opinions” on which to base policy and practice in both government and business. See: Insights on AI Catastrophic Risks: Conversations with Geoff Hinton

While the debate is in its infancy, the ramifications of its outcome will likely affect us all. It is both interesting and prudent to stay informed and form your own judgment: use ChatGPT yourself, do your own research across a broad range of credible sources, consider all points of view, and then make up your own mind.

Enjoy the journey!
