---
title: "ZAWYA-PRESSR: Cisco unveils key strategies for securing AI applications amidst rapid adoption in the Middle East"
type: "News"
locale: "zh-HK"
url: "https://longbridge.com/zh-HK/news/271787454.md"
description: "Cisco has outlined four key strategies for securing AI applications as their adoption accelerates in the Middle East. These strategies include open-source scanning to identify vulnerabilities, vulnerability testing for AI components, the use of AI firewalls to mitigate risks, and enhanced data loss prevention measures. Fady Younes, Managing Director for Cybersecurity at Cisco, emphasized the need for organizations to adapt traditional security practices to the AI lifecycle to manage emerging risks effectively. The focus is on protecting AI applications from development to production, ensuring digital trust across various sectors."
datetime: "2026-01-07T12:18:06.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/271787454.md)
  - [en](https://longbridge.com/en/news/271787454.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/271787454.md)
---

> Supported languages: [简体中文](https://longbridge.com/zh-CN/news/271787454.md) | [English](https://longbridge.com/en/news/271787454.md)

# ZAWYA-PRESSR: Cisco unveils key strategies for securing AI applications amidst rapid adoption in the Middle East

**Dubai, United Arab Emirates** - Cisco highlights four priority focus areas organizations should consider to secure AI applications as they scale adoption. The guidance outlines how security teams can adapt proven application security practices to AI, helping organizations across the Middle East manage emerging risks and maintain digital trust.

As AI adoption scales across the Middle East, including government, financial services, energy, and critical infrastructure, CISOs and IT leaders are under pressure to secure AI applications across the full lifecycle, from the data they rely on to the models they deploy.
**Four focus areas for AI application security:**

- **Open-source scanning:** AI application development relies heavily on components such as open-source models, public datasets, and third-party libraries. These dependencies can include vulnerabilities or malicious insertions that compromise the entire system.

- **Vulnerability testing:** Static testing for AI applications involves validating the components of an AI application, including binaries, datasets, and models, to identify vulnerabilities such as backdoors or poisoned data. Dynamic testing evaluates how a model responds across various scenarios in production. Algorithmic red-teaming can simulate a diverse and extensive set of adversarial techniques without requiring manual testing.

- **Application firewalls:** The emergence of generative AI applications has given rise to a new class of AI firewalls designed around the unique safety and security risks of LLMs. These solutions serve as model-agnostic guardrails, examining AI application traffic in transit to identify and prevent failures and to enforce policies that mitigate threats such as PII leakage, prompt injection, and denial-of-service (DoS) attacks.

- **Data loss prevention:** The rapid proliferation of AI and the dynamic nature of natural-language content make traditional DLP ineffective. Instead, DLP for AI applications examines inputs and outputs to combat sensitive data leakage. Input DLP can restrict file uploads, block copy-paste functionality, or restrict access to unapproved AI tools. Output DLP uses guardrail filters to help ensure model responses do not contain personally identifiable information (PII), intellectual property, or other sensitive data.

Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, commented: “As AI adoption accelerates across the region, organizations are moving quickly from pilots to production, and that shift changes the risk profile.
Securing AI applications requires looking beyond traditional application controls to protect the full AI lifecycle, from the data and third-party components feeding models to how those models behave in real-world use. By applying familiar security principles in AI-specific ways, organizations in the Middle East can scale innovation with confidence while reducing risks such as prompt injection and sensitive data leakage.”

**Protecting AI applications from development to production**

Risk exists at virtually every point in the AI lifecycle, from sourcing supply-chain components through development and deployment. The security measures highlighted above help mitigate different risk areas, and each plays an important role in a comprehensive AI security strategy.

Send us your press releases to pressrelease.zawya@lseg.com

Disclaimer: The contents of this press release were provided by an external third-party provider. This website is not responsible for, and does not control, such external content. This content is provided on an “as is” and “as available” basis and has not been edited in any way. Neither this website nor our affiliates guarantee the accuracy of or endorse the views or opinions expressed in this press release. The press release is provided for informational purposes only. The content does not provide tax, legal or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither this website nor our affiliates shall be liable for any errors or inaccuracies in the content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the information within this article is at your sole risk.
To the fullest extent permitted by applicable law, this website, its parent company, its subsidiaries, its affiliates and the respective shareholders, directors, officers, employees, agents, advertisers, content providers and licensors will not be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, including without limitation, lost profits, lost savings and lost revenues, whether in negligence, tort, contract or any other theory of liability, even if the parties have been advised of the possibility or could have foreseen any such damages.

### Related stocks

- [Cisco (CSCO.US)](https://longbridge.com/zh-HK/quote/CSCO.US.md)

## Related news and research

- [Cisco Report: Strategic Wireless Investments are Driving Higher ROI for Enterprises in the AI Era | CSCO Stock News](https://longbridge.com/zh-HK/news/281545907.md)
- [Insig AI Plans Growth Drive and Eyes Nasdaq Dual Listing](https://longbridge.com/zh-HK/news/281311983.md)
- [Ex-OpenAI's Kass: AI Is Going to Make a Lot of Winners](https://longbridge.com/zh-HK/news/281017504.md)
- [BullFrog AI Signs Major AI Drug Discovery Partnership](https://longbridge.com/zh-HK/news/281092205.md)
- [K-Buddhism: "AI Sage", Can an AI awaken to its Original Nature? | AMZN Stock News](https://longbridge.com/zh-HK/news/281059138.md)