The OpenAI drama exposed the risks of relying on AI, but there are things you can do to protect yourself.
In Ep #6 ⚡AI Risks, What they are, What To Do 🤯
Read time: 7 mins
The downstream impacts of last week’s OpenAI drama have woken many up to the risks of relying on today’s AI, never mind the future risks of AGI.
News - Why today’s AI is helpful but also risky
People worry about the future of AGI, but there’s plenty to worry about today.
Views - A practitioner’s perspective of today’s AI
Real talk about AI business concerns and mitigation strategies.
How-to’s - AI tips, tricks, and training
Free courses and other useful resources.
3 ways we’re evolving You and AI News for you
We will have a guest author in every newsletter. This week it’s all-round clever-clogs and good guy 🙂 Sean Fay. He’s a healthy mix of techno-optimist and skeptic.
Each newsletter will focus on a theme. This week’s theme is RISK. Next week’s theme is CHIPS, with a great piece from guest author Michael Hay. Future episodes include SaaS Apps, Network Engineering, and more!
We are trying a new format of News, Views, and How-to’s. Hopefully this is more helpful than plain bullet lists of AI news: it keeps you informed, tells you what we think the news means for you, and suggests what to do about it.
The AI Rollercoaster - Laughing or Screaming?
Why today’s AI is helpful but also risky
People worry about the future of AGI, but there’s plenty to worry about today.
What did last week’s OpenAI drama mean to you? Some were bemused and considered it juvenile behaviour (can these people really be safeguarding the future of AI?). Others feared OpenAI had discovered the AI equivalent of Pandora’s Box or the Ark of the Covenant and we were all about to die. People already using AI in their work were panicking because, suddenly, a pillar of their process was crumbling in front of their eyes.
For the everyman out there using AI for work, this drama raised real concerns: can I trust Big Tech to deliver AI and run my business on it?
Software developers now rely on pairing with AI like GitHub Copilot to do their work: is their productivity halved, or worse, when the AI is “unavailable”? It’s not easy to switch the AI you code with every day.
Content-heavy businesses where staff use AI to boost productivity… all those “shortcuts” die for a while… can anyone remember how to do them manually?
Non-AI tools with AI embedded, so the whole tool is non-functional for a while if the back-end AI is down. Is this your sales CRM? Your order-to-cash process?
While the AI purists, techno-optimists, and decelerationists argue about how far AGI is over the horizon, normal people just trying to do their jobs are concerned with the impact on their business today.
To mitigate the risks of using AI today, your alternatives depend on your AI Level (100-Novice, 200-Intermediate, 300-Advanced, 400-Expert):
Level 100 - Don’t rely on AI at all.
Kick the AI can down the road for a few months. Let things settle down.
Level 200 - Look at alternatives to OpenAI.
Is Microsoft Bing (which uses OpenAI) more reliable — if it thinks Australia doesn’t exist…
Is Google Bard/Duet good — even though it hallucinates emails…
What about Anthropic’s Claude? Can I just swap out ChatGPT for a different LLM? Do the same prompts give the same answers?
Level 300-400 - Build your own private, local AI.
Using Hugging Face models and LangChain code, people are even running LLMs on their laptops; a minimal sketch follows below.
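For Level 300 readers, here is a minimal sketch of what running an LLM locally can look like with the Hugging Face transformers library. The model name below is a tiny placeholder so it runs on almost any laptop, not a production choice; swap in any chat-tuned model from the Hub that fits your hardware.

```python
# A minimal local-LLM sketch, assuming `pip install transformers torch`.
# "distilgpt2" is a tiny placeholder model; swap in a real chat-tuned
# model from the Hugging Face Hub for anything beyond a demo.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "An on-premises LLM reduces vendor risk because",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```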
What is the “Effective Altruism” movement?
e/acc vs e/alt
Pejoratively called “Decels” by the e/acc (Accelerationist) camp, the Effective Altruism movement wants to slow the pace of AI innovation. They are scared of AGI, distrustful of Big Tech, and want globalist/governmental control over AI development; for example, they promote “sovereign AI”. Check out more here.
Views - A practitioner’s perspective of today’s AI
Real talk from a practitioner about AI business concerns and mitigation strategies. By Sean Fay.
Chaos at the top usually indicates chaos below (or maybe drives chaos below?), and chaos in the operations of a critical piece of one’s infrastructure (perhaps not critical to all OpenAI customers) can cause nervousness and gnashing of teeth within IT organizations.
Should I, as a leader of an organization that uses OpenAI, have a backup plan?
Much like when “the cloud” was a new thing, one should not put all their eggs in one basket. There are options available that may give you some peace of mind and allow you to sleep soundly at night.
If you are an Azure shop, then you likely already know that Microsoft is a very big investor in OpenAI and has extended ChatGPT and its underlying technology into its Azure offerings. This is a good hedge against potential chaos at OpenAI in the aftermath of the news there.
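As a hedged illustration of what that pivot looks like in practice: the same openai Python SDK (v1.x) can point at an Azure OpenAI deployment instead of api.openai.com. The endpoint, key, API version, and deployment name below are placeholders for your own Azure resources.

```python
# A hedged sketch: the `openai` SDK (v1.x) pointed at an Azure deployment.
# Endpoint, key, and deployment name are placeholders for your resources.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-AZURE-KEY",                                 # placeholder
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="your-gpt4-deployment",  # Azure routes by deployment name, not model ID
    messages=[{"role": "user", "content": "Confirm you are serving from Azure."}],
)
print(response.choices[0].message.content)
```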
Pick another LLM. There are the offerings from Anthropic, whose ChatGPT competitor Claude does well with certain tasks but is not as strong as GPT-4 (hint: nobody is as good right now, but there are options that are close, and getting better).
As of November 21st, Anthropic has just released Claude 2.1, which has a massive 200k context window and full API support; you can check it out here.
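For the curious, here is a quick, hedged sketch of calling Claude directly with Anthropic’s Python SDK. SDK surfaces change quickly, so treat the method and parameter names as a snapshot and check the current docs before relying on it.

```python
# A hedged sketch of Anthropic's Python SDK (`pip install anthropic`).
# Reads ANTHROPIC_API_KEY from the environment; the SDK surface has
# changed more than once, so verify against current documentation.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-2.1",
    max_tokens=256,
    messages=[{"role": "user", "content": "What is a 200k context window good for?"}],
)
print(message.content[0].text)
```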
Go open source with options like Meta’s Llama 2 and others (you can find many options on the Hugging Face Hub).
What is a Context Window?
The easiest way to wrap your head around it is as the AI’s “memory”. For instance, suppose you want to analyze a book: you load it in, then chat with the AI and ask questions about it. You are limited in how many questions you can ask, and answers you can get, before the context window is used up and the AI starts forgetting parts of the book. For comparison, the current heavyweight model, OpenAI’s GPT-4 Turbo, has a context window of 128k tokens. A context window of 100k tokens is about 75,000 words, or roughly one average novel. Doubling that gives you a lot more room for the context you want the LLM to have available.
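If you want to see where a given document sits against those limits, a rough token count is easy to compute. Here is a hedged sketch using OpenAI’s tiktoken tokenizer; the file name is a placeholder, and the words-per-token ratio above is only an average.

```python
# A rough token count against a context window, using OpenAI's tiktoken
# (`pip install tiktoken`). The file name is a placeholder.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
with open("my_novel.txt", encoding="utf-8") as f:  # placeholder file
    tokens = enc.encode(f.read())

print(f"{len(tokens):,} tokens of GPT-4 Turbo's 128,000-token window")
```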
So what should I do, Sean?
Here is one idea that allows you to stay with OpenAI but easily pivot should the winds change and the hairs on your neck stand up (I call this the risk management sense).
There is a framework called LangChain, originally devised as a way to quickly build prototype LLM-based applications (think ChatGPT-style apps), providing developers with tools to make that simple and easy.
LangChain has morphed a bit and is now much more of a production-ready, tool-agnostic platform to build an application on (though some would argue that any abstraction layer, of which LangChain is certainly one, precludes it from being production-ready).
What if your IT department wants to pivot? Say they decide they do not like what’s happening at OpenAI and want to try another option (Google’s Bard, Anthropic’s Claude, etc.). Is this easy?
How hard is it to pivot?
How long will it take to get productive?
Do we have the right people to swap LLMs, and manage multiple LLMs?
The questions are heavy, I get it. Answering them might feel like more work than just putting up with a bit of craziness at OpenAI.
LangChain is a way to help “insulate” yourself from one LLM’s unreliability and to pivot not just which LLM you use, but also which vector stores (think databases, for those not already familiar) or other common components of an LLM-based application, without having to rewrite how you interact with and consume that LLM.
When you are up and running with LangChain, swapping LLMs is as simple as changing a couple of lines of code and bam, you are using Bard, Claude, or whatever the next big breakthrough is. This approach, a Level 300+ move that is not for everyone, is a true hedge against chaos at OpenAI, against Google dropping Bard (Google never drops a product, right?), and against the next big news story that we don’t know yet.
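To make that concrete, here is a minimal sketch of the two-line swap. Import paths and class names are from a 2023-era LangChain release and move around between versions, so treat it as illustrative rather than copy-paste gospel.

```python
# A minimal sketch of the swap, assuming a 2023-era LangChain install
# (`pip install langchain openai anthropic`) and API keys in the environment.
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "Summarize for an executive audience:\n\n{text}"
)

llm = ChatOpenAI(model="gpt-4")            # today's choice...
# llm = ChatAnthropic(model="claude-2.1")  # ...and the pivot is this one line

chain = prompt | llm  # the rest of the application never changes
print(chain.invoke({"text": "LangChain decouples apps from LLM vendors."}).content)
```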
I think this kind of AI risk mitigation is worth a look at least, and it should be one option in your discussions about your backup plan, because if you don’t have one, you should.
How-to’s - AI tips, tricks, and training
Free courses and other useful resources.
We’ve got a heavy focus on FREE training today.
Spend an hour or two on any of these at the weekend, and next week you’ll be further ahead on your personal AI journey from level 100 → 200 → 300 → 400 🫠
IN NEXT WEEK’S EP. #7 OF YOU AND AI NEWS
Next week’s theme will be CHIPS (CPUs, not potatoes) — what some people think is the most important constraint on AI development.
Our new guest author, Michael Hay, will be sharing his perspective as a tech industry leader, with the latest chip-oriented news, views, and how-tos — for example, if you’re GPU poor, what can you do?
Get involved 👍️ 👀 🙏 🧨
We have exciting plans to develop the You & AI Newsletter and we want to involve YOU!
We are not just looking for feedback on You & AI News content (does it help you? Are we missing something? What do you need?).
We are recruiting guest authors for a new “Real world perspective” series. If you think “Nobody would be interested in what I have to say” then you’re probably wrong.
At You & AI News we want to write for as many people as possible: from their perspective; about how AI is helping or hindering them; what their hopes and fears are; and actionable advice to make progress and get on the AI bus instead of being under it.
If you’re interested, reply to the newsletter, share it with someone you think would be interested, send us an email at [email protected], send us a message on X, or visit our LinkedIn page.