Almost every day we hear about new AI capabilities, and about how data privacy in the era of big data affects our lives. The repercussions for the economy and the workforce are profound. Businesses already use artificial intelligence for many different tasks: marketers use it to promote products, the financial industry uses it to process credit applications, and new applications such as health diagnosis are emerging alongside them.
Big data and artificial intelligence demand a new privacy compact
Big data drives a major part of these advancements. Massive datasets, often made up of personal data, are constantly and rapidly being added to collections. Beyond basic user information and demographics, corporations also want to know everything about people's personal lives and shopping habits.
It all starts with data collection: users explicitly entrust companies that gather data from online transactions, and they leave further traces behind through everyday communications and the sensors they pass. With smartphones and social media expanding digital footprints at enormous speed, collecting users' private data becomes easier and more convenient every day.
The use of personal data has evident privacy implications, especially when personally identifiable information (PII), information that can be used to identify an individual, is sensitive. One prevalent fear is that personal data, once online, stays on the network forever, or that it can never be corrected. Governments began to act as early as 1995, with the EU leading these policy efforts by proposing data privacy rights for citizens.
The EU General Data Protection Regulation (GDPR) has since come into force. It strengthens the role of consent in the processing of personal data, adds digital rights for users, and focuses on how an organization should build data protection and privacy into its processes.
To be honest, there is an evident tension between the unquestionable need for data privacy and the need to use data for AI applications, because AI can create new privacy harms. Each individual piece of the personal-data puzzle may well be covered by the citizen's consent; but when all that personal data is fed to AI algorithms, the algorithms can derive new, sensitive PII. A well-known New York Times article about Target predicting a customer's pregnancy shows clearly how purchase data can be mined for patterns that reveal sensitive facts about people. Such inferences can cause real harm whether or not they are correct. A correct inference that someone has a health issue can affect their health insurance or employment, while an incorrect inference that a woman is pregnant can lead to discrimination in a job interview.
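To make the inference risk concrete, here is a minimal, purely illustrative sketch of how a scoring model might derive a sensitive attribute from innocuous purchase records. The product names, weights, and threshold below are all invented for illustration; they are not taken from Target's actual model or any real system.

```python
# Illustrative only: a toy linear scoring model that infers a sensitive
# attribute (here, likely pregnancy) from ordinary purchase records.
# All product names and weights are hypothetical.

PREGNANCY_WEIGHTS = {
    "unscented lotion": 0.30,
    "calcium supplement": 0.25,
    "large tote bag": 0.15,
    "cotton balls": 0.20,
}

def pregnancy_score(purchases):
    """Sum the weights of purchased items; higher means a stronger inference."""
    return sum(PREGNANCY_WEIGHTS.get(item, 0.0) for item in purchases)

def flag_customer(purchases, threshold=0.6):
    """Return True if the toy model would flag this customer."""
    return pregnancy_score(purchases) >= threshold

# A basket of individually innocuous, individually consented-to items
# can still cross the threshold and yield new sensitive PII:
basket = ["unscented lotion", "calcium supplement", "cotton balls"]
print(flag_customer(basket))  # True
```

The point of the sketch is that no single purchase is sensitive on its own; the sensitive fact emerges only from the combination, which is exactly the gap that item-by-item consent fails to cover.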
For these reasons, the GDPR introduces the basic principles of fairness and transparency. Before any data is processed, citizens must receive a privacy notice ensuring that they know exactly what will happen next. In some cases, privacy impact assessments are explicitly required to identify and mitigate privacy risks. The transparency principle also grants a right to explanation after processing: citizens have the fundamental right to obtain human intervention, to express their views, to receive an explanation of decisions based on automated processing, and to challenge such decisions.
These new rights have also spurred a heated debate. A report by the Center for Data Innovation argues that, in the name of guarding consumer privacy, the GDPR provisions that address artificial intelligence slow down innovation and research. The report identifies several aspects of the GDPR that could negatively affect the development of artificial intelligence in Europe, chief among them the following:
The more variables an AI model includes, and the more complex the links between them, the harder it becomes for humans to assess how the algorithm arrived at a particular result.
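The explainability point can be illustrated with a sketch: for a simple linear model, the contribution of each variable to a decision can be read off directly and shown to a human reviewer, which is precisely what becomes infeasible as models grow deeper and more entangled. The loan-scoring features and weights below are invented for illustration, not drawn from any real credit system.

```python
# Illustrative only: a linear scoring model whose decision can be fully
# decomposed into per-feature contributions. Features and weights are
# hypothetical.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}

def explain_decision(applicant):
    """Return each feature's contribution to the score, plus the total."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return contributions, sum(contributions.values())

contribs, score = explain_decision(
    {"income": 1.2, "debt_ratio": 0.4, "years_employed": 2.0}
)
for feature, value in contribs.items():
    print(f"{feature}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

With a model like this, a citizen exercising the GDPR's right to explanation could be told exactly which factors drove the decision and by how much; with millions of interacting parameters, no such itemized account exists.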
In conclusion, addressing the predicted data privacy harms is a complex, multi-faceted problem that is likely to stay with us for years to come. The GDPR carries particular weight because it regulates the world's largest economic region, but other approaches can also have great impact, from documenting model-building decisions to creating due-process rights. On the technology side, explainable algorithms are a ripe area for artificial intelligence and big data innovation.
As these efforts mature, the GDPR may well frame its constraints in a technology-neutral way rather than necessarily requiring human involvement. Either way, the GDPR is right to insist on fairness, both on moral grounds and for business profitability: there is a clear business case for building trust on the basis of fairness and transparency. One recent study found that people readily accept potentially intrusive uses of their private data, such as predicting their behavior, in exchange for services like Google Now, showing that trust matters alongside the value delivered. Conversely, another study shows that people's trust in how corporations handle their private data is gradually eroding. With so much at stake, organizations should seize the opportunity to regain people's trust by handling their data with transparency and fairness.