About AI
- Is technology neutral?
- How do some applications of AI harm humanity?
- Doesn't AI solve many problems?
- Bias & Discrimination
- Balancing the economic benefits of AI innovation against the harms
- Environmental Impact
- Impact on work
- Deepfake
- An AI Harms and Governance Framework for Trustworthy AI
- Data Practices and Surveillance in the World of Work Country Analyses
- Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
- Dialect prejudice predicts AI decisions about people's character, employability, and criminality
- Americans' views of AI - Pew Research
- From Burnout to Balance: AI-Enhanced Work Models
Risk Areas
- Data Practices and Surveillance in the World of Work Country Analyses
- Bill of Digital Rights
- Understanding live facial recognition statistics - Big Brother Watch
- AI and LLM regulation - UK Government must act now on copyright, say Lords
- ChatGPT provides false information about people, and OpenAI can’t correct it
- noyb urges 11 DPAs to immediately stop Meta's abuse of personal data for AI
- How faithful is text summarisation?
- Air Canada - when Chatbots lie
- Deepfake
- Bias & Discrimination
- Copyright Infringement
- AI Chatbot produces misinformation about elections
- Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
- Dialect prejudice predicts AI decisions about people's character, employability, and criminality
- Evaluating faithfulness and content selection in book-length summarization
- AI models collapse when trained on recursively generated data
- Bias of AI-generated content: an examination of news produced by large language models
Case Studies
- Air Canada - when Chatbots lie
- Copyright Infringement
- How faithful is text summarisation?
- AI Chatbot produces misinformation about elections
- ChatGPT provides false information about people, and OpenAI can’t correct it
- AI models collapse when trained on recursively generated data
- How ChatGPT is skewing language
Protecting Values
- An AI Harms and Governance Framework for Trustworthy AI
- The Path to Trustworthy AI: G7 Outcomes and Implications for Global AI Governance
- FAIR: Framework for responsible adoption of artificial intelligence in the financial services industry
- Trustworthy and Ethical Assurance of Digital Healthcare
- AI4People Ethics Framework 2018
- AI4People: On Good AI Governance
- AI4People: 7 AI Global Frameworks
Policy
- UK Government Consultation Response: A pro-innovation approach to AI regulation.
- USA White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- USA White House: FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
- USA White House: Blueprint for an AI Bill of Rights
- EU AI Act - full text
- EU AI Act: High-level summary
- China AI Regulation: commentary by Latham & Watkins law firm
- China’s AI Regulations and How They Get Made, Carnegie Endowment for International Peace
- China: Provisions on the Management of Algorithmic Recommendations in Internet Information Services
- China: Provisions on the Administration of Deep Synthesis Internet Information Services
- China: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023 [TRANSLATION]
- IEEE Prioritizing People and Planet as the Metrics for Responsible AI
- IEEE Standard Model Process for Addressing Ethical Concerns during System Design
- IEEE Standard for Transparency of Autonomous Systems
- IEEE GET Program for AI Ethics and Governance Standards
- Machine Learning Safety: Testing Benchmarks Competition
- AI4People Ethics Framework 2018
- AI4People: On Good AI Governance
- Algorithmic transparency and accountability in the world of work
- European Policy Centre: AI & the Future of Work
- Bill of Digital Rights
- Copyright Infringement
- AI and LLM regulation - UK Government must act now on copyright, say Lords
- UK House of Lords Select Committee Report: Large language models and generative AI