Of every ten tech articles published these days, it seems the majority are about AI. In fact, everything from robotics to machine learning to the ever-growing realm of data has ties to the future of artificial intelligence.

Now, for those of you who read my blogs regularly—two people aside from my Mom—this is the part where you probably expect me to delve into some post-apocalyptic rhetoric, drawing conclusions around AI that lead to a Skynet-fueled demise. And though I’d love to rant on about my true feelings when it comes to AI, this is more about the security and future of how this new and exciting (scary) technology will come into its full potential.

First, a new development regarding AI has recently come into play. For those of you who keep apprised of world news, European data management law is about to change in a significant way. As of May 2018, the General Data Protection Regulation (GDPR) will grant individuals rights regarding when and how their personal data is accessed, used, and shared within and outside of European borders.

Now, as much as writing an article on the legal terms and conditions of international data law would be thrilling—I’d prefer to chew tinfoil for a day—let’s get to the meat of the subject: AI and security.

Over the past few months, a fear has surfaced surrounding a potential interpretation of the new GDPR data management law: the need to adhere to something called a “right to explanation” when companies use machine learning algorithms. The GDPR states that data controllers must notify consumers how their data will be used, including “the existence of automated decision-making, and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

Now, as much as that may sound convoluted and far too legal to care about, we must also consider the impact of the regulation's vagueness. In many cases, the new regulation can be interpreted to cover practices such as machine-based decision making and its impact on individuals.

For instance, if a student applies to a university where a machine is responsible for admissions, and the application is rejected by that machine with no human interaction, should the university not owe that person an explanation? After all, there are exceptions to every rule, which leads us to the question: Can machines factor in the human equation of common sense?
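To make the admissions scenario concrete, here is a minimal sketch of what "meaningful information about the logic involved" could look like in code. Everything in it—the field names, the thresholds, the criteria—is invented for illustration; no real admissions system works this simply.

```python
# A hypothetical rule-based admissions screen that can explain a rejection,
# rather than acting as a black box. All fields and thresholds are invented.

def screen_application(applicant: dict) -> tuple[bool, list[str]]:
    """Return (admitted, reasons) so every rejection comes with its logic."""
    reasons = []
    if applicant["gpa"] < 3.0:
        reasons.append(f"GPA {applicant['gpa']} below required 3.0")
    if applicant["test_score"] < 1200:
        reasons.append(f"Test score {applicant['test_score']} below required 1200")
    admitted = not reasons  # admit only when no criterion failed
    return admitted, reasons

admitted, reasons = screen_application({"gpa": 2.8, "test_score": 1250})
print(admitted)          # False
for reason in reasons:
    print(reason)        # GPA 2.8 below required 3.0
```

The point of the sketch is the return value: the decision and its rationale travel together, which is trivial for hand-written rules but much harder for the opaque machine-learning models the GDPR debate is actually about.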

But far beyond that, as IT security continually evolves, so does the need to rely on machine learning and AI to combat all of the nefarious IT attacks and attackers—making university admissions pale in comparison to real-time threats.

With the advent of the Internet of Things (IoT), social media, data growth, and mobility, our world is entrenched in digital transformation, with AI as the clear path to security management. However, if there is a new requirement to explain how the AI in question is managing and making decisions about data, can that explanation hinder, or perhaps even cripple, our collective global IT defenses?

With the constant threat of IT attacks hovering over all of us 24/7/365—let’s not forget the attacks in Ukraine this past year that virtually shut down the country—we must ask ourselves the great philosophical question of personal information versus security—and, to be honest, I haven’t come to a conclusion either way. Of course, the reaction of some may very well be the mantra of “security over personal data,” while others might feel the opposite: “live free or die.”

The challenge here is that neither is necessarily the correct course of action. As we continue to build our lives around highly complicated technology, our approach to solving such issues will require an equal amount of complication. And though we seem to strive for a future where machines make all our decisions for us, perhaps taking a step back and debating the ins and outs of legal outcomes should be left to us humans—no matter how much tinfoil we all need to chew.