Eva, this is an insightful article. I learned so much. You very skillfully combined your knowledge of education and AI while moving into a very new realm. Very few people are talking about this topic in the way you have so artfully; they are either too technical for me or way too high-level. Your article masterfully hit the sweet spot. I am very interested in this topic, and your article really opened my eyes to the full capabilities of how this could be used in business and also for my personal projects. At the end of the article, you talked about how you would organize a company around agentic AI. This was very helpful. Thank you so much. ❤️
Since you have stepped into technology, do it professionally. Every technology has pros and cons, or even aspects that make it harmful, no matter how many good words its promoters have said about it. This article omits the cons and the question of compliance with already emerging regulations.
Example: the Internet is full of personal data, e.g., on Facebook. An AI agent MAY NOT use this information without the explicit consent of the personal data owner (GDPR). So, AI agents may not collect information without complying with this regulation.
The EU AI Act demands that every outcome of an AI system be subject to human agency and oversight, especially when the AI operates in socio-cultural areas. Agentic AIs will (soon) violate this principle and policy.
We still do not know how to control security in chains of AI agent invocations (service-oriented principles applied to such interactions are not yet accepted).
Thus, since the article omits the negative sides of AI agents, it was expected at least to review what a user should or may do if an AI-based system pisses him or her off.
Appreciate you engaging; these are important topics.
That said, your comment misrepresents both the intent and content of my article.
I never claimed agentic AI is risk-free. In fact, I emphasize the need for human oversight, critical evaluation, and clear intent. These are core safeguards in any responsible AI system. The piece was written to demystify a rapidly evolving field and equip professionals with the mental models to engage thoughtfully, not to provide a legal or regulatory deep-dive.
Bringing up GDPR or the EU AI Act is fair, but assuming I ignored them out of negligence rather than scope misses the mark. Raising concerns is valuable. So is reading generously.
I’ll continue writing critically and constructively about these shifts; because we need more nuanced dialogue, not fear-based assumptions.
I do appreciate that this post is an article rather than research. However, at the level of popular presentations of this technology (judging by other publications), "human oversight, critical evaluation, and clear intent" are introduced as "best practices," which unfortunately conveys a risk-free status. It is very important for consumers and decision-makers to recognise the risks associated with this technology on top of its development dexterity, because a responsible business should provide explicit proactive and reactive controls for each such risk.
Good points. This was an article, not a research paper.