1st October marked our second ‘Sip with the expert’, with Kidist Dinku Guta presenting on ‘Can AI think?’ Here’s a recap of the conversation and the lively Q&A that followed.
Key points of the presentation:
- AI as Pattern Recognition: AI operates by identifying patterns in user behavior, much like familiar systems such as Netflix suggesting shows or Siri responding to queries. The goal is for the AI to act like a “consistent partner”, predicting future needs based on past interactions.
- Limitation by Data: AI is limited strictly to the data it is provided. It is incapable of reaching true human levels of creativity or emotional intelligence because it is restricted by the information it has been trained on; however, it can simulate aspects of both.
- Training and User Guidance: Users are constantly, and often indirectly, training the AI by interacting with it (such as giving thumbs-up or thumbs-down feedback on responses). This interaction helps developers refine models over time. P.S. Swearing doesn’t help the AI understand you better; it’s precision, not profanity, that improves results.
- Energy and Environmental Cost: Training a single AI model consumes an immense amount of energy. It is estimated that training OpenAI’s GPT-4 consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days. This massive energy use requires data centers to implement cooling systems, which has led to debates over environmental impacts, including concerns about contaminated water in nearby neighborhoods due to the use of groundwater for cooling.
- Future Verifiability: Newer AI versions are being developed specifically to address the problem of fake information by offering verifiable links and references to reputable publications or scientific papers in their answers.
- AI is a powerful, yet limited, tool that should be applied strategically, rather than universally, especially given its current developmental stage, reliance on large datasets, and substantial resource demands.
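The energy comparison above can be sanity-checked with some back-of-envelope arithmetic. This short sketch takes the two figures cited in the talk (50 GWh for GPT-4 training, three days of city supply) and computes the average power demand that comparison implies for San Francisco; the figures themselves are estimates from the presentation, not verified measurements.

```python
# Back-of-envelope check of the talk's energy comparison.
# Assumed inputs (estimates cited in the presentation):
gpt4_training_gwh = 50   # estimated energy to train GPT-4, in GWh
days = 3                 # claimed duration of city supply

hours = days * 24  # 72 hours

# Average power the city would need to draw for the comparison to hold:
# 50 GWh = 50,000 MWh; spread over 72 h gives the implied demand in MW.
implied_city_power_mw = gpt4_training_gwh * 1000 / hours

print(f"Implied average demand: {implied_city_power_mw:.0f} MW")
```

The result is roughly 700 MW of continuous draw, a plausible order of magnitude for a large city, which is why the comparison is often quoted.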
Some Questions & Answers we delved into:
Q: Is the data I input into AI confidential?
A: Not automatically, unless you are using a system with explicit privacy protections (such as an enterprise version). While individual conversations are not used to retrain the model in real time, your data may still be stored and reviewed to improve system performance. Therefore, you should avoid sharing sensitive personal, financial, or company information that you would not want exposed.
Q: Does interacting with AI train it?
A: Yes. When a user interacts with the AI (e.g., asking questions or giving thumbs-up or thumbs-down feedback), they are providing information and guiding it, which indirectly trains the system for future responses. Developers use this feedback to improve AI models over time, though the AI doesn’t usually learn directly from each individual conversation in real time.
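The point above is that feedback is collected and reviewed offline rather than updating the live model. This minimal sketch illustrates that separation; all names here (`Feedback`, `record_feedback`, `summarize`) are hypothetical and do not correspond to any vendor's real API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Feedback:
    """One thumbs-up/down signal attached to a conversation."""
    conversation_id: str
    rating: int  # +1 for thumbs up, -1 for thumbs down

log: list[Feedback] = []

def record_feedback(conversation_id: str, thumbs_up: bool) -> None:
    """Store the signal; note the live model is NOT updated here."""
    log.append(Feedback(conversation_id, 1 if thumbs_up else -1))

def summarize(entries: list[Feedback]) -> dict[str, int]:
    """Aggregate counts like these are what developers review later
    when deciding how to refine a model."""
    return dict(Counter("up" if f.rating > 0 else "down" for f in entries))

record_feedback("conv-1", True)
record_feedback("conv-2", False)
record_feedback("conv-3", True)
print(summarize(log))  # {'up': 2, 'down': 1}
```

The design choice worth noticing is that `record_feedback` only appends to a log; any actual model improvement happens later, from aggregates, which mirrors the "indirect training" described in the talk.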
Q: Is AI limited compared to human creativity and emotional intelligence?
A: Yes. While it can generate creative outputs and simulate emotional expression based on patterns in data, it doesn’t actually experience emotions or original thought the way humans do.
Disclaimer: This content is shared as part of our networking discussions only and does not represent formal advice or an official position.