Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don’t have to be a math whiz to end up working in technology.
Growing up in Bucharest, Romania, Irina had relatively little interest in mathematics. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming.
Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Security. She is studying one of today’s most pressing cybersecurity problems: What can be done to keep artificial intelligence (AI) safe from attacks? (And she still gets excited to see the models at work.)
Turning Theoretical Concepts Into Practical Applications
Although Irina only graduated five years ago, she has found herself at the forefront of IBM’s efforts to battle adversarial AI threats. After studying computer science and engineering in her native Bucharest and at the National School of Computer Science and Applied Mathematics in France, she joined the IBM Research team in Dublin to dive headfirst into the most cutting-edge security technology.
Her personal interests range from adversarial AI to Mahalanobis distance and Rademacher complexity (which she researched for her Ph.D.). So, it’s not surprising to hear her say she would have stayed in academia had she not brought her research skills to the corporate world.
At IBM, Irina gets to see her research applied to real-world technology — and she loves that her work is guided by practical applications rather than pure theory.
“To me, it’s the relevance to the modern world,” she said of her role. “On the one hand, it’s a very interesting research problem because we don’t have the full answer. The problem itself has some very interesting properties that make it challenging and fun to analyze.
“On the other hand, to me, it has huge practical impact because, so far, we haven’t seen so many AIs out there — but we’re seeing more and more of them today. As soon as more decision processes are based on these AIs, of course, people are going to try to attack them for profit.”
AI Research: The Importance of Vulnerabilities
For Irina, researching the vulnerabilities in AI and machine learning is crucial. To demonstrate why, she raised the example of neural networks.
“We’ve known about neural networks for the last 30 years, but they were forgotten for a while by the community because they weren’t performing well enough, and have only regained traction in recent years,” Irina explained. “Now, imagine if we couldn’t use AI and deep learning in applications because of security vulnerabilities — if people said this technology has amazing performance, but it’s unreliable because it can always be attacked. To me, there’s this risk of AI, deep learning and machine learning being forgotten again by the community because they are unreliable or, even worse, being used in spite of the risks.”
That’s why Irina is in Dublin, working with a team of five to probe vulnerabilities in AI and machine learning so that we can all use them safely. The same security concerns that affect any other computer-based system also apply to AI, Irina said.
To protect against these threats, security teams need insights specific to the medium at hand. While Irina said this is a “very active research field,” she also noted that researchers have thus far been more successful at attacking AI systems and exploiting their vulnerabilities than at defending them effectively.
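To make the asymmetry concrete, here is a minimal, hypothetical sketch of the kind of evasion attack researchers study: a fast-gradient-sign-style perturbation against a toy logistic-regression classifier. The model, weights, and inputs are all invented for illustration; this is not IBM’s tooling or Irina’s actual research code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

def fgsm_perturb(weights, x, true_label, epsilon):
    """Shift each feature by epsilon in the direction that increases the
    loss for the true label (the sign of the input gradient).

    For logistic regression, the gradient of the loss with respect to
    the input is proportional to the weights; its sign flips with the
    true label, so a label-1 input is pushed opposite to the weights."""
    sign = 1.0 if true_label == 0 else -1.0
    return [xi + epsilon * sign * (1 if w > 0 else -1 if w < 0 else 0)
            for xi, w in zip(x, weights)]

# Invented example: a confidently classified input and its adversarial twin.
weights, bias = [2.0, -1.5, 0.5], 0.1
x = [0.4, -0.2, 0.3]                                  # model says class 1
x_adv = fgsm_perturb(weights, x, true_label=1, epsilon=0.6)

print(predict(weights, bias, x))      # above 0.5: confident class 1
print(predict(weights, bias, x_adv))  # below 0.5: flipped to class 0
```

A small, targeted nudge to each feature is enough to flip the prediction, which is exactly why attacks of this family are so much easier to produce than robust defenses are to build.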
Building Defenses Against the Unknown
The next step is building defenses.
“The problem currently is none of the existing defense methods actually solve the problem. All of them are partial answers. Most will only work in certain conditions, against certain attacks, only if the attack is not too strong, only if the attacker doesn’t have full access to the system, etc.,” Irina explained. “What we’re looking into is to solve the problem of what would be a good defense for AI against all types of attacks. We want to remove the vulnerabilities that we’re aware of and build a defense against the still-unknown ones.”
Naturally, Irina wants to see AI and machine learning succeed so they can become a bigger part of our daily lives and free security teams to focus on more pressing tasks and big-picture strategies. It plays to her longtime interest in machinery and how it’s all put together.
As she continues her research, Irina gets to indulge her love of complex problems and take satisfaction in knowing that what was once a childhood fascination is today helping make the modern world a safer place to live.
The post How a Fascination With Machinery Led Irina Nicolae to AI Research appeared first on Security Intelligence.
Author: Security Intelligence Staff