Customer Voices: How Mercari’s Security Team is Building Guardrails for the AI Era
This edition of Opal’s Customer Voices series features Jason Fernandes, VP of Security and Privacy at Mercari, on how his team is adapting security for the AI era.
Q&A with Jason Fernandes, VP of Security & Privacy at Mercari
Opal’s Customer Voices series spotlights leaders from across our customer base who are solving modern security challenges. For this edition, we spoke with Jason Fernandes, VP of Security and Privacy at Mercari, about how his team is adapting Mercari’s security model for the AI era. From shaping new guardrails to standing up a dedicated AI Security function, Jason shares what it takes to keep innovation secure as technology and risk evolve.
Why AI Security—and Why Now
Q: When did AI security first become a dedicated focus at Mercari, and what prompted it?
A: AI has always been something we have looked at in various implementations across our product. But it was this year, with the broader uptake of LLMs and related AI technology across Mercari, including the company putting forward its ‘AI Native’ direction, that we were spurred to move AI from a topic for each functional team to a dedicated AI Security function.
Q: Was there a specific moment when it became clear AI risk needed its own focus?
A: I think our team was a little ahead of the curve in Japan, as we closely follow trends in tech and security in the U.S. That allowed us to prepare and lay the groundwork before the company declared its AI-native direction: we saw where U.S. companies in general were heading and could work out the structures we would need to put in place.
Q: What kinds of risks or challenges surfaced first?
A: Many of the risks and challenges we have been handling align with industry standards like the OWASP Top Ten for AI/LLM. However, with the rapid increase in agentic AI—and, particularly, in the use of LLMs—many of the challenges we face sit at the authentication, authorization, and auditing layer.
We have been trying to tackle issues such as ensuring:
Uptake doesn’t lead to an over-prevalence of API keys over keyless authentication options
Authorization can be correctly validated to avoid issues like the confused-deputy problem (sketched in code after this list)
We can manage supply chain risk by controlling the tools connected to AI solutions
We can meet the integrity and accountability requirements for products we have in highly regulated industries
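To make the confused-deputy point concrete, here is a minimal sketch in Python. Everything in it (ToolRequest, the demo tokens, the toy ACL) is hypothetical and invented for illustration, not Mercari's implementation: the idea is simply that a tool invoked by an agent authorizes against the end user's delegated credential, never against the agent's own, typically broader, service identity.

```python
# Hypothetical sketch only; names are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolRequest:
    agent_id: str              # the calling agent (the "deputy")
    user_token: Optional[str]  # delegated credential of the end user
    resource: str
    action: str

def authorize(req: ToolRequest,
              verify_user: Callable[[str], str],
              is_allowed: Callable[[str, str, str], bool]) -> bool:
    """Grant access based on the end user's delegated credential,
    never on the agent's own (typically broader) privileges."""
    if req.user_token is None:
        # Refusing to fall back to the agent's service identity is
        # exactly what blocks the confused-deputy pattern.
        return False
    user = verify_user(req.user_token)   # e.g., OIDC token verification
    return is_allowed(user, req.resource, req.action)

# Toy usage: the agent may be powerful, but the request is judged on
# what the user behind it is allowed to do.
demo_tokens = {"tok-alice": "alice"}
demo_acl = {("alice", "orders", "read")}
req = ToolRequest("agent-42", "tok-alice", "orders", "read")
print(authorize(req,
                lambda t: demo_tokens[t],
                lambda u, r, a: (u, r, a) in demo_acl))  # True
```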
Building Mercari’s AI Security Function
Q: How did the dedicated AI Security team come together?
A: Allan Wirth, the manager of our Platform Security team, suggested the idea. Initially, I was hesitant as I believed our function-focused approach was an easier way to organize teams, and I didn’t want to increase the number of teams unnecessarily. However, given the growing internal focus on AI security, I came around to Allan’s proposal and decided it would be better to handle this as a dedicated function.
Q: What skills or backgrounds did you look for when assembling the team?
A: Our AI Security team is an all-star dream team. Since AI impacts all security practices, the team consists of a player from each of our other functional security teams. We also looped in our Privacy team to ensure we could stay in sync with them, and we recently included other functional second-line teams through a new AI governance structure to further avoid a siloed approach to tackling AI risk.
Q: How do you balance security guardrails with enabling teams to innovate quickly with AI?
A: Setting AI aside, our team’s priority has always been to build in sufficient security controls—but with an internal-customer-facing, UX-friendly approach. Our current team mission is to ‘Empower growth and innovation through adaptable and proactive Security & Privacy,’ and our team vision is ‘Make the secure and privacy-first path the easiest path.’ While making these a reality can be challenging, both tenets help ensure balance and keep our controls smooth and effective.
Defining “AI-native” Security in Practice
Q: AI security can mean many different things. How do you define it at Mercari?
A: AI security is, indeed, very broad. To make it clearer and easier to manage, we split it into two categories—AI in our products and AI in our enterprise—and have been working on various initiatives to tackle the challenge of securing AI uptake across both areas.
Q: What does that look like in your environment today?
A: We have dedicated channels and a dedicated team for AI-related consultations, which makes things straightforward for our internal customers. We handle everything from agent-builder, AI automation, and MCP server consultations to more general design reviews on our current products. In addition to ‘Security for AI,’ we are also looking at ‘AI for Security’ and building AI into some of our own workflows and initiatives within the Security division.
Q: Are there frameworks or best practices that have helped guide your approach?
A: For AI security specifically, thus far we have focused on risk taxonomies such as the OWASP AI/LLM Top Ten, as well as more practical frameworks such as Google’s SAIF. In my role in AI Governance, I also closely follow trends and changes in the NIST AI RMF, ISO 42001, and the EU AI Act, among other frameworks, as well as emerging discussions at regulatory agencies in Japan.
Q: You mentioned challenges around authentication and authorization for agentic AI; what’s something that’s worked well or taught you something new?
A: Where possible, we provide our own gateways for LLM usage and MCP servers to enable centralized management. We also closely review internal projects and implementations to ensure we can tackle issues like ‘confused deputy’ problems, which break down authorization layers.
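As a rough illustration of the centralized gateway pattern Jason describes, here is a hypothetical sketch (the endpoint, key handling, and model allow-list are all invented): one proxy holds the provider credential, enforces a model policy, and emits an audit record for every call, so LLM traffic never bypasses central management.

```python
# Hypothetical LLM gateway sketch; not Mercari's actual gateway.
import json, time, urllib.request

# Placeholders: in practice the key lives only in the gateway, so
# internal callers never handle provider credentials directly.
PROVIDER_URL = "https://llm.gateway.internal/v1/chat"  # assumed endpoint
PROVIDER_KEY = "stored-only-in-the-gateway"
ALLOWED_MODELS = {"approved-model-a", "approved-model-b"}

def gateway_call(caller_id: str, model: str, prompt: str) -> str:
    """Single choke point for LLM traffic: policy + audit + credentials."""
    if model not in ALLOWED_MODELS:
        raise PermissionError(f"model {model!r} is not approved")
    # Central audit record: who called, when, with which model.
    print(json.dumps({"ts": time.time(), "caller": caller_id, "model": model}))
    req = urllib.request.Request(
        PROVIDER_URL,
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Authorization": f"Bearer {PROVIDER_KEY}",
                 "X-Caller-Id": caller_id},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```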
Rethinking Identity and Access for Agentic AI
Q: As AI systems become more autonomous, how do you see identity and access governance evolving?
A: As agentic AI develops further and demand from the business to grant agents significant autonomy grows, agents will increasingly need to be treated as identities. It will become important to visualize the data and functions agents handle, deploy circuit breakers, enforce least agency, identify misalignment, and be able to isolate and handle rogue agents.
Many companies already struggle with identity and access management for their employees. Agentic identity will compound the challenge. I hope to see more solutions coming forward that can make it easier to manage this at scale.
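One way to picture "agents as identities" with least agency and a circuit breaker is the following hypothetical sketch; the class and its fields are invented for illustration, not a real product's API.

```python
# Illustrative only: agents as first-class identities with an explicit
# scope allow-list (least agency) and a circuit breaker.
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                 # the accountable human or team
    scopes: frozenset          # least agency: an explicit allow-list
    error_budget: int = 5      # anomaly budget before the breaker trips
    errors: int = 0
    suspended: bool = False

    def invoke(self, scope: str) -> bool:
        if self.suspended:
            return False       # a tripped breaker isolates the agent
        if scope not in self.scopes:
            self.errors += 1   # out-of-scope attempts count as anomalies
            if self.errors >= self.error_budget:
                self.suspended = True  # circuit breaker: halt for review
            return False
        return True

bot = AgentIdentity("doc-bot", "platform-sec", frozenset({"docs:read"}))
print(bot.invoke("docs:read"))      # True: within its least agency
for _ in range(5):
    bot.invoke("payments:write")    # repeated overreach trips the breaker
print(bot.suspended)                # True: the agent is now isolated
```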
Q: Do you think we’ll need a formal taxonomy for different types of agents—coding, orchestration, customer-facing, and so on—as this technology matures?
A: As the technology matures, it will likely be folded more closely into existing frameworks and taxonomies and treated as just another type of technology, albeit one with some unique caveats that need consideration.
Looking Ahead at Mercari
Q: What’s next for your team as AI adoption deepens across Mercari?
A: In addition to AI Security, we’re now building out an AI Governance function. With these two teams working together, a priority will be to use automated inventories of AI usage in our environment to build better AI management systems. As with our security policies, our current goal is to make it as clear as possible what people should and should not do. This allows innovation within clear guardrails that push people to build more secure and AI-risk-aware products.
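Purely as a hypothetical illustration of what one entry in such an automated AI-usage inventory could look like (all field names invented), a record might tie each AI integration to an owner, the data it touches, and a review status:

```python
# Hypothetical inventory schema; field names are invented for illustration.
from dataclasses import dataclass, asdict
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    BLOCKED = "blocked"

@dataclass
class AIUsageRecord:
    system: str               # e.g., an internal agent or MCP server
    owner: str                # the accountable team
    model: str                # which model/provider it uses
    data_classes: list        # what data it touches (PII, public, ...)
    status: ReviewStatus

record = AIUsageRecord(
    system="listing-helper",
    owner="search-team",
    model="approved-model-a",
    data_classes=["public", "listing-text"],
    status=ReviewStatus.PENDING,
)
print(asdict(record))  # the kind of row a governance dashboard would show
```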
Q: What advice would you give to other leaders just beginning to formalize their AI security programs?
A: Many of the fundamentals of AI security do not differ from what many leaders are already doing in their security programs. However, the increased interest and rapid uptake of agentic AI and LLMs require that we drive these fundamentals at scale. Don’t be afraid to run in parallel and use the AI-native wave as your own tailwind to push overall improvements in guardrails for security.
Q: Is there anything else you’d like to share about how Mercari thinks about secure AI enablement or the broader evolution of security in this space?
A: AI security keeps changing, and it’s important to remain flexible and adaptive. We have a lot of initiatives in this area now, and while most cover the fundamentals, these might change dramatically as the technology evolves. Ultimately, it comes down to our mission to make building secure products and securing the enterprise as end-user-friendly and easy as possible. For us, it’s about taking this approach and talking openly and honestly with internal stakeholders.
Final Note
Q: What’s one assumption or buzzword around AI security you wish people would drop—something that gets in the way of real progress?
A: I think there is a lot of focus on securing ‘around’ AI, or on integrating AI into security to defend against threat actors who are also using AI. While these are important topics too, I think there could be more discussion of ‘Secure AI Products’ to help drive progress in building security into AI solutions themselves, so that we have fewer ‘non-default’ defenses to build around the technology. So I don’t think there are buzzwords we should drop, but ‘Secure AI Product’ is one we should try to make trend more.