In the ever-evolving landscape of artificial intelligence, a new machine-learning model named Life2vec has emerged, sparking both fascination and concern. Life2vec drew on extensive data covering millions of Danish residents over a four-year period and achieved an 80 percent accuracy rate in predicting mortality. The algorithm used birth dates, sex, employment, location, and data from Denmark’s universal health care system.
Life2vec’s foundation lies in its ability to process and analyse vast amounts of data, using a transformer-style architecture similar to those behind popular AI chatbots such as OpenAI’s ChatGPT and Google’s Bard. As impressive as this may be from a technological standpoint, the advent of such predictive AI raises profound ethical concerns.
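To make the architectural comparison concrete, the sketch below shows, in a purely illustrative way, how a transformer encoder can turn a person’s sequence of life events into a single risk score, much as a chatbot turns a sequence of words into a prediction of the next one. This is not Life2vec’s actual code: the LifeEventClassifier class, the vocabulary size, and every hyperparameter here are hypothetical, and a real model would also encode the timing of each event.

```python
# Illustrative sketch only: a tiny transformer encoder over discrete
# "life event" tokens, pooled into one binary outcome probability.
# All names, sizes, and data below are hypothetical.
import torch
import torch.nn as nn

class LifeEventClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # one vector per event type
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                # single outcome score
        # A real model would also add positional/temporal encodings here.

    def forward(self, event_ids):                        # (batch, seq_len) token ids
        x = self.encoder(self.embed(event_ids))          # contextualise each event
        return torch.sigmoid(self.head(x.mean(dim=1)))   # pool events into one probability

# A person's record becomes an ordered sequence of event ids: a diagnosis,
# a job change, a relocation, and so on.
model = LifeEventClassifier()
fake_life = torch.randint(0, 10_000, (1, 64))            # 64 made-up events for one person
print(model(fake_life))                                  # untrained, so the output is meaningless
```

The point of the sketch is simply that once a life is encoded as a token sequence, the same machinery that powers chatbots can be pointed at it.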
Denmark, where Life2vec was developed and its training data collected, has stringent privacy protections and antidiscrimination laws in place. The situation is starkly different in the United States, however, where no comprehensive federal data privacy law exists. The potential for abuse of such sensitive information is a cause for apprehension, and consideration must be given to how this predictive technology could affect various aspects of life.
One area of concern is the potential misuse of predictions in life and medical insurance. Insurance companies could use Life2vec’s predictions to adjust premiums or deny coverage based on an individual’s predicted lifespan, and medical insurers might leverage the same information when making decisions about coverage and treatment plans. Such practices could lead to increased discrimination and financial burdens for individuals whose predicted mortality raises red flags.
The workplace may also witness discriminatory practices, especially against women. Employers might hesitate to hire women of childbearing age, fearing potential maternity leave and the associated costs, and if predictive algorithms are used to project an individual’s career trajectory or performance, employers may make biased hiring and promotion decisions.
In politics, such predictions could be used to hinder opportunities or manipulate certain groups; in the financial sector, they could influence loan approvals and interest rates. Furthermore, predictions related to educational outcomes could inadvertently shape opportunities, with individuals being judged not on their potential but on algorithmically determined propensities.
Equally invasive and powerful machine-learning tools are likely already out there; Big Data companies and governments have been building profiles of the population for years. Some of these tools even verge on the dystopian concept of “precrime” laid out in Philip K. Dick’s 1956 novella The Minority Report, and it would be naive to think this is not already happening to some degree behind closed doors.
The ethical challenges surrounding Life2vec emphasise the urgent need for comprehensive data privacy laws to safeguard individuals from potential abuses. The absence of such laws in the United States and many other countries highlights the vulnerability of personal information in the age of advanced AI. As we navigate the uncharted waters of AI development, it is imperative to strike a balance between technological innovation and ethical responsibility.
Finally, it is worth pondering the broader implications of a society that runs on automatic predictions. Most people operate on predictable patterns, influenced by routines, societal norms, and data-driven algorithms. The question arises: can practicing a more deliberate and conscious way of life alter the course of predictability? The very core of unpredictability is free will, yet how free are our decisions really? Network scientist Albert-László Barabási, who found by tracing 50,000 cell phones that 93 percent of our movements are predictable, casts serious doubt on that independence. Perhaps embracing a lifestyle that challenges the automatic nature of our existence may lead to less predictable outcomes.
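For readers wondering where a number like that 93 percent comes from, such mobility studies typically measure the entropy of a person’s location history and convert it, via Fano’s inequality, into an upper bound on how predictable that person can possibly be. The snippet below is a hedged illustration of that conversion; the entropy value and location count are invented for demonstration and are not figures from the study.

```python
# Illustrative only: derive a predictability ceiling from movement entropy
# using Fano's inequality, the approach behind entropy-based mobility studies.
# The inputs below are made-up example values, not data from any real person.
import math
from scipy.optimize import brentq

def max_predictability(entropy_bits, num_locations):
    """Solve S = H(p) + (1 - p) * log2(N - 1) for the predictability bound p."""
    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def gap(p):
        return binary_entropy(p) + (1 - p) * math.log2(num_locations - 1) - entropy_bits

    return brentq(gap, 1e-9, 1 - 1e-9)   # root-find p between 0 and 1

# Hypothetical person: ~0.8 bits of entropy per step across ~60 visited places.
print(f"{max_predictability(0.8, 60):.0%}")   # prints a ceiling of roughly 93%
```

In other words, the regularity of our daily movements alone caps how surprising we can be, before any sophisticated model is even applied.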
Taking personal responsibility, practicing critical thinking, exposing ourselves to novelty, and remaining open to constant change may save us from ourselves. As we grapple with the ethical dimensions of predictive AI, it is essential to reflect on how we engage with the world and whether a more intentional approach can shape a future less bound by automated predictions and algorithms.
Image by Firefly