
AI HALLUCINATIONS

AI hallucinations are a complex phenomenon that reveals the intricacies and limitations inherent in machine intelligence, manifesting as false or nonsensical outputs that deviate from reality. The issue arises from the probabilistic, pattern-recognition mechanisms of AI models such as GPT, which synthesize responses based on statistical likelihoods rather than grounded understanding. While these hallucinations often appear as confident but inaccurate outputs, such as fabricated scientific references or misinterpreted environmental sounds, they underscore deeper systemic flaws and philosophical questions about the nature of intelligence and knowledge.
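To make the mechanism concrete, here is a minimal sketch in Python, using invented toy probabilities rather than any real model: it samples a continuation purely by statistical likelihood, so a fluent fabrication is chosen as readily as a true statement. This is an illustration of the sampling idea only, not how any particular system is implemented.

```python
import random

# Toy next-token distribution, invented purely for illustration: the "model"
# only knows which continuations are statistically likely, not which are true.
next_token_probs = {
    "The study was published in": [
        ("Nature", 0.4),                             # plausible, sometimes true
        ("Science", 0.3),                            # plausible, sometimes true
        ("the Journal of Imaginary Results", 0.3),   # fluent but fabricated
    ],
}

def sample_continuation(prompt, temperature=1.0):
    """Pick a continuation by likelihood alone.

    Nothing here checks whether the chosen token corresponds to a real fact;
    high-probability fabrications are selected as readily as truths, which is
    the basic mechanism behind confident hallucinations.
    """
    tokens, probs = zip(*next_token_probs[prompt])
    # Temperature reshapes the distribution but never adds factual grounding.
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(3):
        print("The study was published in", sample_continuation("The study was published in"))
```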


READ MORE

BOOK REVIEW

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. By Shannon Vallor


Recent events vividly reflect Vallor’s central arguments. Issues such as algorithmic bias, misinformation, and environmental challenges highlight the risks she warns about. For example, systems like OpenAI’s GPT-4 have been criticized for perpetuating biases embedded in their training data, illustrating how AI often mirrors past injustices. Similarly, misinformation campaigns powered by AI tools have undermined democratic processes in recent elections, deepening political polarization and eroding public trust.

READ MORE

 


BOOK REVIEW

Moral Codes: Designing Alternatives to AI

"Moral Codes: Designing Alternatives to AI" by Alan F. Blackwell presents a compelling critique of the current trajectory of artificial intelligence (AI) research, advocating for a paradigm shift towards the development of more expressive and human-centric programming languages. Blackwell, a seasoned programming language designer since 1983, contends that the prevailing AI agenda has deviated from its original promise of alleviating mundane human labor, instead encroaching upon domains of human creativity and emotional engagement.

 

The author introduces the concept of "MORAL CODE," an acronym for More Open Representations, Access to Learning, and Control Over Digital Expression. He posits that by focusing on these principles, we can develop programming languages that empower users to articulate their intentions more effectively to computers, thereby fostering software that serves societal well-being rather than merely enhancing efficiency or profitability.


READ MORE

We have been conditioned and imprinted, much like Pavlov's dogs and Lorenz's geese, by largely unconscious economic stimuli that have become both a global consensus and a global source of disease.

Poenaru, West: An Autoimmune Disease?
