AI Education & Case Studies
Search or browse our knowledge base for information on AI, human values at risk, case studies, policy & standards around the globe, and AIFCS advocacy.
- About AI
- Risk Areas
- Case Studies
- Protecting Values
- Policy
What is AI?
Risks to Humanity
- Is technology neutral?
- How do some applications of AI harm humanity?
- Doesn't AI solve many problems?
- Bias & Discrimination
- Balancing the economic benefits of AI innovation against the harms
- Environmental Impact
- Impact on work
- Deepfake
- An AI Harms and Governance Framework for Trustworthy AI
- Data Practices and Surveillance in the World of Work Country Analyses
- Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
- Dialect prejudice predicts AI decisions about people's character, employability, and criminality
- Americans' views of AI - Pew Research
- From Burnout to Balance: AI-Enhanced Work Models
Future of AI
AI For Good
- Articles coming soon
Authentic Relationships
- Articles coming soon
Cognitive Acuity & Creativity
Dignity of Work
Moral Autonomy
- Articles coming soon
Privacy & Freedom
- Data Practices and Surveillance in the World of Work Country Analyses
- Bill of Digital Rights
- Understanding live facial recognition statistics - BigBrotherWatch
- AI and LLM regulation - UK Government must act now on copyright, say Lords
- ChatGPT provides false information about people, and OpenAI can’t correct it
- noyb urges 11 DPAs to immediately stop Meta's abuse of personal data for AI
Truth & Reality
- How faithful is text summarisation?
- Air Canada - when Chatbots lie
- Deepfake
- Bias & Discrimination
- Copyright Infringement
- AI Chatbot produces misinformation about elections
- Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
- Dialect prejudice predicts AI decisions about people's character, employability, and criminality
- Evaluating faithfulness and content selection in book-length summarization
- ChatGPT provides false information about people, and OpenAI can’t correct it
- AI models collapse when trained on recursively generated data
- Bias of AI-generated content: an examination of news produced by large language models
Environment
Accessibility
Chat Bots
- Air Canada - when Chatbots lie
- Copyright Infringement
- How faithful is text summarisation?
- AI Chatbot produces misinformation about elections
- ChatGPT provides false information about people, and OpenAI can’t correct it
- AI models collapse when trained on recursively generated data
- How ChatGPT is skewing language
Computer Programming
Creative Industries
Criminal Justice
- Articles coming soon
Cybersecurity
Device Security
- Articles coming soon
Education
Financial Services
Health Care
Journalism
Legal
- Articles coming soon
Manufacturing
- Articles coming soon
Pharmaceutical
- Articles coming soon
Surveillance
Transport
Worker Rights
Social Media
Work & Productivity
Framework for Trustworthy AI
- An AI Harms and Governance Framework for Trustworthy AI
- The Path to Trustworthy AI: G7 Outcomes and Implications for Global AI Governance
- FAIR: Framework for responsible adoption of artificial intelligence in the financial services industry
- Trustworthy and Ethical Assurance of Digital Healthcare
- AI4People Ethics Framework 2018
- AI4People: On Good AI Governance
- AI4People: 7 AI Global Frameworks
Justice
Trust
Country Legislation
- UK Government Consultation Response: A pro-innovation approach to AI regulation.
- USA White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- USA White House: FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
- USA White House: Blueprint for an AI Bill of Rights
- EU AI Act - full text
- EU AI Act: High-level summary
- China AI Regulation: commentary by Latham & Watkins law firm
- China’s AI Regulations and How They Get Made, Carnegie Endowment for International Peace
- China: Provisions on the Management of Algorithmic Recommendations in Internet Information Services
- China: Provisions on the Administration of Deep Synthesis Internet Information Services
- China: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023 [TRANSLATION]
Standards
- IEEE Prioritizing People and Planet as the Metrics for Responsible AI
- IEEE Standard Model Process for Addressing Ethical Concerns during System Design
- IEEE Standard for Transparency of Autonomous Systems
- IEEE GET Program for AI Ethics and Governance Standards
- Machine Learning Safety: Testing Benchmarks Competition
- AI4People Ethics Framework 2018
- AI4People: On Good AI Governance
Policy Gaps
- Algorithmic transparency and accountability in the world of work
- European Policy Centre: AI & the Future of Work
- Bill of Digital Rights
- Copyright Infringement
- AI and LLM regulation - UK Government must act now on copyright, say Lords
- UK House of Lords Select Committee Report: Large language models and generative AI
AIFCS Advocacy
- Articles coming soon
Public Surveys
Popular Articles
Newest Articles
Recently Updated Articles
FAQs coming soon
Help Links
Need assistance or have questions? Explore our list of resources below.
Contact Us
Reach out to our team for general inquiries.
Community Forum
Participate in community discussions and offer your support to others.
Open Support Ticket
If you need assistance with an issue, submit a support request.