Deepfake is a term used to describe the creation of fake artefacts, usually of people, based on learning from images and speech of the real person. Various well-publicised examples have circulated for a few years, such as the Channel 4 production of a fake speech by the UK’s Queen, broadcast at Christmas 2020. Politicians and celebrities from Barack Obama and Mark Zuckerberg to Tom Cruise have also been used in deepfake videos (Creative Bloq). The algorithms are also being used to bring people “back from the dead” by using old recordings and images to create interactive avatars of people who have died. Perhaps the most famous example is a recreation of Salvador Dalí as a host for the Dalí Museum in Florida.
It is true that more people are becoming aware of the dangers of some applications of AI, such as social media, but these platforms deliberately exploit our vulnerabilities, especially those of the younger generation. However, many other applications of AI are hidden, such as its use in credit checking and in building a profile of you from your internet activity.
Unfortunately some innovations can harm humanity. For example, drugs and vaccines have to be trialled and approved by a regulator to protect the population. Modern synthetic drugs are an “innovation” but result in addiction, so in many states they are criminalised. We usually accept the need for regulation in such areas.
Regulation should be targeted at the root cause of many of the problems that AI creates: free access to private data and inappropriate use where it harms humanity. This is regulation that is light but tight, because it would define where AI cannot be used and deter big tech (the biggest problem), as well as any SME or start-up, from building a business model or product on private data.
It may be much cheaper, but companies have no motivation to self-regulate when profits are driven by ever more uses and abuses of AI. Facebook is a classic case in point. Despite all the posturing by the company, it is the algorithms that are in control, driving engagement and advertising profit. The only way self-regulation would work in this example would be for the companies that use data to manipulate users into staying on the platform to remove those algorithms and stop harvesting users’ internet activity. This would massively impact profits, which is why they won’t act.
AI can, for example, pick out faces in a crowd from surveillance cameras. These computers are not really intelligent in the way that humans are, but they can appear to be better than a human when used for very limited tasks such as playing chess or Go.
This is sometimes true and is often one of the main reasons that we use AI applications such as digital assistants like Alexa or Siri. Whilst there is nothing wrong with convenience per se, we need to consider the unforeseen costs of convenience. Children are becoming less empathetic, less willing to engage in one-on-one relationships, and can develop stereotypical expectations from the behaviour of such assistants and robots.
AI can provide many benefits that are benign in terms of their impact on what it means to be a human being, yet the push by big tech to create applications “just because we can” is creating bigger problems for humanity in terms of reduced freedom, privacy, moral responsibility and jobs.