Layer Seven Security

Cybersecurity Insurance: Is it Worth the Cost?

According to the most recent annual Cost of Cyber Crime Study by the Ponemon Institute, the average cost of detecting and recovering from cyber crime for organizations in the United States is $5.4 million. Median costs have risen by almost 50 percent since the inaugural study in 2010. This figure masks the enormous variation in data breach costs, which can range from several hundred thousand to several hundred million dollars depending on the severity of the breach. A growing number of insurance companies are offering cyber protection to help organizations manage such costs. These include traditional carriers in centers such as London, New York and Zurich, as well as new entrants targeting the cybersecurity insurance market. Carriers in the latter category should be carefully vetted, since some new entrants have been known to offer fraudulent policies to exploit the growth in demand for cyber insurance.

Cybersecurity insurance has been commercially available since the late 1970s but was limited to banking and other financial services until 1999-2001. It became more widespread after Y2K and 9/11, events that also drove up premiums and led carriers to exclude cyber risks from general policies. More recently, the dramatic rise in the threat and incidence of data breaches has propelled cybersecurity into a boardroom issue and generated growing interest in cyber policies from organizations looking to limit their exposure.

A 2011 study by PricewaterhouseCoopers found that approximately 46 percent of companies hold insurance policies to protect against the theft or misuse of electronic data, consumer records and similar assets. This appears to be contradicted by a 2012 survey by the Chubb Group of Insurance Companies, which found that 65 percent of public companies forego cyber insurance. The discrepancy may be due to a general lack of awareness among respondents of the exact nature of their coverage. Many respondents appear to be under the impression that cyber risks are covered by general insurance policies, even though this is no longer the norm.

The cybersecurity insurance industry is highly diverse, with carriers employing a variety of approaches. Some offer standardized insurance products with typically low coverage limits. Others provide customized policies tailored to the specific needs of each client. The industry is also evolving rapidly to keep pace with emerging threats and trends in cybersecurity.

Policy premiums are driven primarily by industry factors. E-commerce companies performing online transactions while storing sensitive information such as credit card data are generally considered high risk and are therefore subject to higher premiums. Health institutions hosting data such as social security numbers and medical records are also deemed high risk.

Premiums typically range from $10,000 to $40,000 per $1 million of coverage, with policies providing up to $50 million in total cover. However, most standard policies only cover specific third-party costs, meaning losses incurred by a company’s customers or partners. This includes risks related to unauthorized access and the disclosure of private information, as well as so-called conduit injuries that cause harm to third-party systems.
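The premium arithmetic above reduces to a simple rate calculation. The sketch below uses only the figures cited in this article; the $10 million policy size is a hypothetical example, not a quote from any carrier.

```python
def annual_premium(coverage_usd, rate_per_million_usd):
    """Annual premium given a rate per $1 million of coverage."""
    return coverage_usd / 1_000_000 * rate_per_million_usd

# A hypothetical $10 million policy, within the $50 million ceiling cited above
coverage = 10_000_000
low = annual_premium(coverage, 10_000)   # at $10,000 per $1M of coverage
high = annual_premium(coverage, 40_000)  # at $40,000 per $1M of coverage
print(f"${low:,.0f} to ${high:,.0f} per year")  # $100,000 to $400,000 per year
```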

Policies that provide coverage for first-party areas such as crisis management, business interruption, intellectual property theft, extortion and e-vandalism carry far higher premiums and are therefore relatively rare. This limits the appeal of cybersecurity insurance and means organizations will need to self-insure for such risks for the foreseeable future. The situation is unlikely to improve until actuarial data for cybersecurity risks is more widely available and shared between carriers. This may require the establishment of a federal reinsurance agency and legislative standards for cybersecurity.

Carriers are unlikely to offer full coverage for all first- and third-party costs arising from security breaches due to the moral hazard associated with such coverage: organizations that completely transfer cyber risk have no incentive to invest in preventative and monitoring controls to manage security risks. Hence, most carriers exclude breaches caused by negligence. Other common exclusions cover fines and penalties, often for regulatory reasons.

Aside from industry considerations, premiums for cybersecurity insurance are driven by the risk management culture and practices of the insured organization. Carriers often assess cybersecurity policies and procedures before setting premiums. Organizations that adopt best practices or industry standards for system security are generally offered lower premiums than those that do not. Insurers therefore work closely with clients during the underwriting process to measure the likelihood and impact of relevant cyber risks, including a review of management controls. Carriers that choose not to assess the cybersecurity practices of prospective clients tend to compensate by including requirements for minimum acceptable standards within policies. These clauses ensure that carriers do not reimburse organizations that failed to follow generally-accepted standards for cybersecurity before a security breach. Cybersecurity standards for SAP systems are embodied in benchmarks aligned to security recommendations issued by SAP, including the SAP Cybersecurity Framework outlined in the white paper Protecting SAP Systems from Cyber Attack.

Cybersecurity insurance is most valuable for organizations with mature cyber risk cultures, including effective standards and procedures for preventing, detecting and responding to cyber attacks. It enables such organizations to transfer specific breach-related costs that are more cost-effectively insured than self-funded. Cybersecurity insurance is not a viable option for companies with weak risk management practices. Even if carriers were willing to insure such high-risk organizations, the premiums would likely outweigh the cost of self-insurance. Furthermore, the likelihood that such organizations would be able to collect on their policies is low.

Five Reasons You Do Not Require Third Party Security Solutions for SAP Systems

You’ve read the data sheet. You’ve listened to the sales spin. You’ve even seen the demo. But before you fire off the PO, ask yourself one question: Is there an alternative?

In recent years, a wide range of third-party security tools for SAP systems has emerged. Such tools perform vulnerability checks against SAP systems and enable customers to detect and remove security weaknesses, primarily within the NetWeaver application server layer. Most, if not all, are capable of reviewing areas such as default ICF services, security-relevant profile parameters, password policies, RFC trust relationships and destinations with stored logon credentials.

The need to secure and continuously monitor such areas for changes that expose SAP systems to cyber threats is clear and well-documented. The real question is whether organisations actually need such solutions. In 2012, the answer was a resounding yes. In 2013, the argument began to waver and was, at best, an unsure yes with many caveats. By 2014, the case for licensing third-party tools has virtually disappeared. There are convincing reasons to believe that such tools no longer offer the most effective and cost-efficient solution to the security needs of SAP customers.

The trigger for this change has been the rapid evolution of standard SAP components capable of detecting misconfigurations that lead to potential security risks. The most prominent of these is Configuration Validation, packaged in SAP Solution Manager 7.0 and above and delivered to SAP customers under standard license agreements. Configuration Validation continuously monitors critical security settings within SAP systems and automatically generates alerts for changes that may expose systems to cyber attack. Since third-party scanners are typically priced by the number of target IPs, Configuration Validation can directly save customers hundreds of thousands of dollars per year in large landscapes. The standard Solution Manager setup process will meet most of the prerequisites for using the component. For customers that choose to engage professional services to enable and configure security monitoring using Solution Manager, the cost of such one-off services is far less than the annual license and maintenance fees for third-party tools.

The second reason for the decline in the appeal of non-SAP security solutions is the lack of support for custom security checks. Most checks are hard-coded, meaning customers are unable to modify validation rules to match their specific security policies. In practice, it is impossible to apply a vanilla security standard to all SAP systems. Configuration standards can differ by environment, by the applications supported by the target systems, by whether the systems are internal or external facing, and by a variety of other factors. It is therefore critical to leverage a security tool capable of supporting multiple security policies. This requirement is currently only fully met by Configuration Validation.

The third reason is security alerting. While some third-party solutions support automated scheduled checks, none can match the native capabilities of Solution Manager, which provides near-instant alerting through channels such as email and SMS.

The fourth and fifth reasons are shortcomings in reporting and product support when compared, respectively, to the powerful analytical capabilities of SAP Business Warehouse integrated within Solution Manager and the reach of SAP Active Global Support.

More information is available in the Solutions section including a short introductory video and a detailed Solution Brief that summarizes the benefits of Configuration Validation and professional services delivered by Layer Seven to enable the solution in your landscape. To schedule a demo, contact us at info@layersevensecurity.com.

A First Look at the U.S. Data Security and Breach Notification Act

On January 30, members of the U.S. Senate and House of Representatives introduced a new bill intended to enforce federal standards for securing personal information and notifying consumers in the event of a data breach. Sponsored by leaders of the Senate Commerce, Science and Transportation Committee, the Data Security and Breach Notification Act of 2014 would require the Federal Trade Commission (FTC) to develop and enforce nationwide security standards for companies that store the personal and financial information of consumers. According to Committee Chairman Jay Rockefeller, “The recent string of massive data breaches proves companies need to do more to protect their customers. They should be fighting back against hackers who will do whatever it takes to exploit consumer information.”

If enacted, the Bill would direct the FTC to develop robust information security measures to protect sensitive data from unauthorised access and exfiltration. The FTC would also be empowered to standardize breach notification requirements across all states so that companies need only comply with a single law. The law would be enforced jointly by the FTC and state attorneys general. Violations would carry civil penalties for corporations and criminal penalties for corporate personnel, the latter including imprisonment for up to five years. Unlike HIPAA and the SEC Disclosure Guidelines, the requirements of the Act are not limited to health organisations or publicly listed companies. They apply equally to private and public organisations that store customer information, across all industries and sectors, and extend to data entrusted to third-party entities.

The proposed federal data security and breach notification standards are firmly supported by the FTC. In a speech delivered to a privacy forum on December 12, 2013, FTC Chairwoman Edith Ramirez endorsed the role of the FTC as an enforcer of consumer data protection standards. The agency has aggressively pursued companies that have suffered data breaches for alleged unfair and deceptive trade practices and has imposed fines of up to $10 million. However, FTC rulings are often challenged on the grounds that the agency lacks a clear legal mandate. The Data Security and Breach Notification Act would provide that mandate, backed by clearly-defined standards for data protection.

This includes standards for identifying and removing vulnerabilities in systems that contain customer information and monitoring such systems for breaches, as required by sections 2 (C) and (D) of the Act. To learn about vulnerabilities affecting SAP systems and about implementing logging and monitoring to detect potential breaches in SAP applications and components, download our white paper Protecting SAP Systems from Cyber Attack. The paper presents a framework of 20 controls across 5 objectives to safeguard information in SAP systems from internal and external threats.

Three Parallels between the POS Breach at Target Corp. and Vulnerabilities in ERP systems

The decision of the Office of the Comptroller at the U.S. Department of the Treasury to recognize cyber threats as one of the gravest risks faced by organisations today appears to be vindicated by the disclosure of an unprecedented data breach at Target Corporation shortly after the release of the Comptroller’s report. Specifics of the breach may not be known until the completion of an investigation currently underway by a forensics firm hired by Target to examine the incident. However, early reports suggest that the event may be one of the most devastating data breaches in recent years. According to a statement released by Target yesterday, approximately 40 million credit and debit card accounts may have been impacted between Nov. 27 and Dec. 15, 2013. The breach appears to have involved all of Target’s 1,800 stores across the U.S. Based on the current average of $200 per compromised record, some estimates have placed the damage of the breach at $8 billion, almost three times the company’s net earnings in 2012.

The significance of the breach relates not only to the volume of records that may have been compromised, but to the type of data believed to have been extracted from Target. This includes sensitive track data stored within the magnetic stripe of payment cards. The card numbers, expiration dates and verification codes obtained through the track data could enable the perpetrators of the crime to create and sell counterfeit payment cards. There are three primary methods for compromising track data in retail scenarios. The first involves targeting switching and settlement systems. These systems are usually heavily fortified and traffic is commonly encrypted. The second entails the use of card skimmers. However, it is highly unlikely that skimmers could have been installed across Target’s nationwide network of stores without detection. Therefore, the most likely method used by the attackers to obtain track data in such large volumes was the compromise of the software that processes card swipes and PINs within Point-of-Sale (POS) systems at Target. Unfortunately, POS systems are a neglected area of information security, often regarded as little more than ‘dumb terminals’. This point of view could not be further from the truth. Today’s POS systems are sophisticated appliances that often run on Linux and Windows platforms. Furthermore, readily-available software development kits (SDKs) for POS systems, designed to enable developers to rapidly deploy applications, could be abused to build dangerous forms of malware. This is the most probable cause of the breach at Target. Herein lies the first parallel between POS and ERP systems: although both process large quantities of sensitive information and lie at the core of system landscapes, security efforts are rarely equal to the strategic importance of such systems or aligned to the risks arising from their architecture.
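Memory-scraping malware of the kind implicated here works because card data sits in POS process memory in the clear for a brief window. The sketch below, with an invented sample buffer, shows how simply a scanner (whether malware or a defensive data-loss-prevention tool) can sweep raw memory for card numbers using the Luhn checksum; real track data also carries expiry dates and service codes that make matching even easier.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used to validate payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(buffer: bytes):
    """Scan a raw buffer for 16-digit runs that pass the Luhn check,
    as a memory scraper (or a DLP scanner) would."""
    return [m.group().decode()
            for m in re.finditer(rb"\d{16}", buffer)
            if luhn_valid(m.group().decode())]

# 4111111111111111 is a well-known Luhn-valid test number;
# the second 16-digit run fails the checksum and is ignored.
sample = b"..junk..4111111111111111..more junk..1234567890123456.."
print(find_pans(sample))  # ['4111111111111111']
```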

The second parallel relates to the method used at Target to access and install the malware within the POS systems. This could only have been possible if the attackers were part of the software supply chain. Therefore, they most likely took advantage of some form of insider access. The counterpart in ERP systems is the often blind trust placed by organisations in third-party developers, consultants and system administrators with broad access privileges.

The final parallel is the use of malware specifically aimed at business systems rather than individuals or consumers. Both POS and ERP systems are witnessing a surge in targeted malware. Systems such as SAP have always contended with this threat. One of the earliest known Trojans for SAP was discovered in 2003: KillSAP targeted SAP clients and, upon execution, would locate and replace SAPGUI and SAPLOGON files. Today’s malware is capable of far more destructive actions such as keylogging, capturing screenshots, and attacking SAP servers through instructions received from remote command and control servers. The recently discovered Carberp-based Trojan is an example of such a threat. You can learn more about the risks posed by this Trojan at the Microsoft Malware Protection Center.

New malware variant suggests cybercriminals are targeting SAP systems

Security researchers at last week’s RSA Europe Conference in Amsterdam revealed the discovery of a new variant of a widespread Trojan program that has been modified to search for SAP systems. This form of reconnaissance is regarded by security experts as the preliminary phase of a planned attack against SAP systems orchestrated by cybercriminals. The malware targets configuration files within SAP client applications containing IP addresses and other sensitive information related to SAP servers, and can also be used to intercept user passwords.

The program is adapted from ibank, a Trojan best known for targeting online banking systems. Ibank is one of the most prevalent Trojans used in financial attacks, based on the number of infected systems. It is often deployed together with the Zeus Trojan to harvest system credentials and is assigned a variety of names including Trojan.PWS.Ibank, Backdoor.Win32.Shiz, Trojan-Spy.Win32.Shiz and Backdoor.Rohimafo. Once installed, the program operates within whitelisted services such as svchost.exe and services.exe and is therefore difficult to detect. It also blocks well-known anti-virus programs. Ibank installs a backdoor on infected systems, enabling remote control of infected hosts. It also provides spying functions and the ability to filter or modify network traffic and change routing tables. The program uses a wide range of APIs to log keystrokes, capture logon credentials, identify, copy and export files and certificates, and perform other malicious activities.

SAP customers are strongly advised to secure SAP installations against the threat of such an attack. Layer Seven Security uses SAP-certified software to identify and remove vulnerabilities that expose SAP systems to cyber attack. This includes misconfigured clients, unencrypted interfaces, and remotely accessible components and services targeted by attackers. Contact Layer Seven Security to schedule a no-obligation proof-of-concept (PoC). PoCs can be performed against up to three targets selected from a cross-section of SAP systems and environments.

SAP HANA: The Challenges of In-Memory Computing

This article is an extract from the forthcoming white paper Security in SAP HANA by Layer Seven Security. The paper is scheduled for release in November 2013.

According to research performed by the International Data Corporation (IDC), the volume of digital information in the world is doubling every two years. The digital universe is projected to reach 40,000 exabytes by 2020. This equates to 40 trillion gigabytes, or 5,200 gigabytes for every human being in the world in 2020. As much as 33 percent of this information is expected to hold analytic value. Presently, only half of one percent of available data is analyzed by organisations.
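As a quick sanity check on the figures quoted above (using decimal units, where 1 exabyte = 10^9 gigabytes), the per-capita number simply divides the projected data volume by the projected world population:

```python
exabytes_2020 = 40_000
gigabytes = exabytes_2020 * 1_000_000_000   # 1 EB = 1e9 GB (decimal units)
assert gigabytes == 40_000_000_000_000      # 40 trillion gigabytes

# Dividing by the per-capita figure recovers the projected world population
per_person_gb = 5_200
print(f"{gigabytes / per_person_gb / 1e9:.1f} billion people")  # ~7.7 billion
```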

The extraction of business intelligence from the growing digital universe requires a new generation of technologies capable of analysing large volumes of data in a rapid and economical way. Conventional approaches rely upon clusters of databases that separate transactional and analytical processing and interact with records stored in secondary or persistent memory formats such as hard disks. Although such formats are non-volatile, they create a relatively high level of latency, since CPUs lose considerable time during I/O operations waiting for data from remote mechanical drives. Contemporary persistent databases use complex compression algorithms to maximise the data held in primary or working memory and so reduce latency. Nonetheless, latency times can still range from several minutes to days in high-volume environments. Persistent databases therefore fail to deliver the real-time analysis of big data demanded by organisations experiencing significant data growth, a rapidly changing competitive landscape, or both.

In-memory databases promise the technological breakthrough to meet the demand for real-time analytics at reduced cost. They leverage faster primary memory formats such as flash and Random Access Memory (RAM) to deliver far superior performance. Primary memory can be read up to 10,000 times faster than secondary memory and generate near-zero latency. While in-memory technology is far from new, it has been made more accessible to organisations by the decline in memory prices, the widespread use of multi-core processors and 64-bit operating systems, and software innovations in database management systems.

The SAP HANA platform includes a database system that processes both OLAP and OLTP transactions completely in-memory. According to performance tests performed by SAP on a 100 TB data set compressed to 3.78 TB in a 16-node cluster of IBM X5 servers with 8 TB of combined RAM, response times vary from a fraction of a second for simple queries to almost 4 seconds for complex queries that span the entire data range. Such performance underlies the appeal and success of SAP HANA. Since its launch in 2010, SAP HANA has been deployed by 2,200 organisations across 25 industries to become SAP’s fastest growing product release.

SAP HANA has emerged against a backdrop of rising concern over information security resulting from a series of successful, targeted and well-publicized data breaches. This anxiety has made information security a focal point for business leaders across all industry sectors. Databases are the vessels of business information and are therefore the most important component of the technology stack. Database security represents the last line of defense for enterprise data. It should comprise a range of interdependent controls across the dual domains of prevention and detection.

The most advanced persistent databases are the product of almost thirty years of product evolution. As a result, today’s persistent databases include a complete suite of controls across both domains, presenting organisations with a high degree of protection against internal and external threats. In-memory databases are, in comparison, a nascent technology, and most do not yet deliver the range of security countermeasures provided by conventional databases. This includes:

Label based access control;
Data redaction capabilities to protect the display of sensitive data at the application level;
Utilities to apply patches without shutting down databases; and
Policy management tools to detect database vulnerabilities or misconfigurations against generally-accepted security standards.

The performance edge enjoyed by in-memory database solutions should be weighed against these security disadvantages vis-à-vis persistent database systems. However, the disadvantages may be short-lived. Security in in-memory databases has advanced significantly over a relatively short period of time. The most recent release of SAP HANA (SPS 06), for example, introduced a number of security enhancements over SPS 05, released a mere seven months earlier. These include support for a wider range of authentication schemes, the binding of internal IP addresses and ports to the localhost interface, a secure store for credentials required for outbound connections, and more granular access control for database users.

The most crucial challenge to database security presented by the introduction of in-memory databases is not the absence of specific security features but architectural concerns. Server separation is a fundamental principle of information security enshrined in most control frameworks including, most notably, the Payment Card Industry Data Security Standard (PCI DSS). According to this principle, servers must be single purpose and therefore must not perform competing functions such as application and database services. Such functions should be performed by separate physical or virtual machines located in independent network zones due to differing security classifications that require unique host-level configuration settings for each component. This architecture also supports layered defense strategies designed to forestall intrusion attempts by increasing the number of obstacles between attackers and their targets. Implementation scenarios that include the use of in-memory databases such as SAP HANA as the technical infrastructure for native applications challenge the principle of server separation. In contrast to the conventional 3-tier architecture, this scenario involves leveraging application and Web servers built directly into SAP HANA XS (Extended Application Services). Unfortunately, there is no simple solution to the issue of server separation since the optimum levels of performance delivered by in-memory databases rely upon the sharing of hardware resources between application and database components.

Aside from such architectural concerns, the storage of large quantities of data in volatile memory may amplify the impact of RAM-based attacks. Although widely regarded as one of the most dangerous security threats, attacks such as RAM-scraping are relatively rare but are becoming more prevalent, since attackers are increasingly targeting volatile memory to circumvent encryption applied to data in persistent memory. RAM-based attacks are also growing in popularity because they leave virtually no footprint and are therefore extremely difficult to detect. This relative anonymity makes them a preferred weapon of advanced attackers motivated by commercial or international espionage.

This paper presents a security framework for SAP HANA SPS 06 across the areas of network and communication security, authentication and authorization, data encryption and auditing and logging. It also provides security-related recommendations for the SAP HANA appliance and SAP HANA One. Taken together, the recommendations in this paper should support the confidentiality, integrity and availability of data in the SAP HANA in-memory database.

Organisations are not effectively addressing IT security and compliance risks according to accounting professionals

The results of the 2013 Top Technology Initiatives Survey reveal that securing IT environments against cyber attack and managing IT risks and compliance rate as two of the three greatest technology challenges for accounting professionals in North America. The survey was performed jointly by the AICPA and CPA, the largest accounting organisations in the United States and Canada, and sampled approximately 2,000 members from the public accounting, business and industry, consulting, government and not-for-profit sectors. Members of both the AICPA and CPA placed securing the IT environment as the second highest priority for organisations in the area of information technology. Managing IT risks and compliance was ranked third by AICPA members and fourth by CPA members.

U.S. respondents expressed average confidence levels of just 51 percent in organisational initiatives designed to manage IT security and 47 percent in initiatives aimed at managing IT and compliance risks. Confidence levels fell drastically in 2013 following the wave of recent well-publicized data breaches: in 2012, U.S. confidence levels for securing IT environments and managing IT risk and compliance were 62 and 65 percent, respectively. However, according to the Chair of the AICPA’s Information Management and Technology Assurance (IMTA) Division, “The decline in confidence levels may mean professionals are making more knowledgeable assessments of the ability of organizations to achieve technology goals. This more realistic assessment indicates that the goals may be more challenging than originally thought, and that organizations must have the focus, commitment and drive to achieve them.”

Layer Seven Security assists organisations worldwide in identifying and removing vulnerabilities that expose SAP systems to cyber attack and impact their ability to comply with the requirements of IT control frameworks. To learn how we can help your organisation manage SAP risks and stay compliant, contact Layer Seven Security.

Introducing the ABAP Test Cockpit: A New Level of ABAP Quality Assurance

The ABAP Test Cockpit (ATC) is SAP’s new framework for Quality Assurance. It performs static and unit tests for custom ABAP programs and introduces Quality-Gates (Q-Gates) for transport requests.

ATC was unveiled at last year’s SAP TechEd. Following a successful pilot, it was released for NetWeaver 7.0 SP12 and NetWeaver AS ABAP 7.03 SP05 in September and October 2012, respectively. General guidelines for configuring and running ATC are available on the SAP Community Network for both developers and quality managers.

ATC integrates directly with the ABAP Workbench and is accessible through SE80, SE24, SE38, SE11 and other Workbench tools. The current iteration of the tool focuses almost exclusively on performance checks for exceptions such as runtime errors. However, SAP has revealed plans to deliver a new Security Scan Solution (SLIN_SEC) as an add-on for the Extended Program Check (SLIN) in ATC. This will enable security vulnerability checks for custom code. The introduction of the Security Scan Solution should improve the general security of ABAP programs and lower the risk of code-level vulnerabilities in ABAP systems, including insufficient authority checks and code injections arising from uncontrolled input. You can learn more about the solution at session SIS261, scheduled for October 24 during this year’s SAP TechEd.
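To illustrate the class of flaw such scanners look for, the toy checker below flags ABAP statements that splice input into a dynamic WHERE clause and reports that contain no AUTHORITY-CHECK. Real tools such as SLIN_SEC or CodeProfiler perform full dataflow analysis; this regex-based sketch (written in Python, against an invented sample report) only illustrates the idea.

```python
import re

# Naive pattern for a dynamic WHERE clause, e.g. SELECT ... WHERE (p_where).
# Splicing user input into such a clause is a classic ABAP SQL injection risk.
DYNAMIC_WHERE = re.compile(r"SELECT .* WHERE \(\w+\)", re.IGNORECASE)

def scan_abap(source: str):
    """Return (line, message) findings for a single ABAP source unit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if DYNAMIC_WHERE.search(line):
            findings.append((lineno, "dynamic WHERE clause - possible SQL injection"))
    if "AUTHORITY-CHECK" not in source.upper():
        findings.append((0, "no AUTHORITY-CHECK statement found"))
    return findings

# A deliberately vulnerable (invented) sample report
sample = """REPORT zdemo.
PARAMETERS p_where TYPE string.
SELECT * FROM kna1 INTO TABLE lt_kna1 WHERE (p_where).
"""
for line, msg in scan_abap(sample):
    print(line, msg)
```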

The alternative to the SAP Security Scan Solution is Virtual Forge CodeProfiler. CodeProfiler also integrates with ATC and performs a patented static code analysis for any type of ABAP program. CodeProfiler provides comprehensive performance and quality testing and is SAP-certified for integration with SAP NetWeaver.

A Dangerous Flaw in the SAP User Information System (SUIM)

Customers that have yet to implement Security Note 1844202, released by SAP on June 10, should do so immediately. The Note deals with a vulnerability that could be exploited to bypass monitoring controls designed to detect users with privileged access, including the SAP_ALL profile, which can be used to grant users almost all authorizations in SAP systems. The vulnerability arises from a flaw in the coding of the RSUSR002 report, accessible through the SAP User Information System (SUIM) or transaction SA38. RSUSR002 is a standard built-in tool used by security administrators and auditors to analyse user authorizations. A side-effect of Note 694250 was the insertion of the following line into the algorithm of RSUSR002:

DELETE userlist WHERE bname = ''

As a result of the insertion, users assigned the name '' are excluded from the search results generated by RSUSR002. This could lead to a scenario in which users are assigned SAP_ALL or equivalent authorizations without detection through regular monitoring protocols, even though the user would remain visible in UST04 and other user tables. Implementing Note 1844202 closes the vulnerability in RSUSR002. Customers can also prevent the assignment of the username using customizing lists. For detailed instructions, refer to Note 1731549.
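The effect of the flawed filter can be simulated in a few lines (Python here rather than ABAP, with invented users): an account excluded by name from the report output still exists in the underlying user store, which is exactly why the monitoring control fails.

```python
# Simulated user table; in SAP terms this corresponds to UST04.
user_table = [
    {"bname": "JSMITH", "profile": "SAP_SOME_ROLE"},
    {"bname": "",       "profile": "SAP_ALL"},  # privileged user with empty name
]

def rsusr002_like_report(users, profile):
    """Report users holding a given profile, with the flawed filter applied."""
    results = [u for u in users if u["profile"] == profile]
    # the flawed insertion: DELETE userlist WHERE bname = ''
    return [u for u in results if u["bname"] != ""]

print(rsusr002_like_report(user_table, "SAP_ALL"))  # [] - SAP_ALL user hidden
print(len(user_table))  # 2 - yet the account still exists in the user table
```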

Exploring the SAP DIAG Protocol

One of the most memorable events at last year’s BruCON in Belgium was Martin Gallo’s exposé of the SAP DIAG protocol. DIAG (Dynamic Information and Action Gateway) is a proprietary protocol that supports client-server communication, linking the presentation layer (SAP GUI) and the application layer (NetWeaver) in SAP systems. During the conference, Gallo presented the findings of his ground-breaking research, which led directly to the identification of several denial-of-service and code injection vulnerabilities arising from security flaws in the DIAG protocol, patched by SAP in 2012.

Most researchers have focused on identifying weaknesses in the compression algorithm that scrambles payloads and other data transmitted through DIAG. The most notable research in this area was performed by Secaron in 2009, which demonstrated that it is possible to intercept and decompress DIAG client-server requests, including usernames and passwords. Subsequent research by SensePost revealed that the LZC and LZH compression methods used by SAP for DIAG are variants of the Lempel-Ziv algorithm. Furthermore, since both methods are also used in the open-source SAP MaxDB, the compression and decompression code-base is publicly available. SensePost used the MaxDB code to create a custom protocol analysis tool in Java capable of compressing and decompressing DIAG messages. The tool could be used to intercept, read and modify client-server traffic in SAP.
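The key point of this research is that compression is not encryption. SAP's LZC and LZH are Lempel-Ziv variants; zlib's DEFLATE (LZ77 plus Huffman coding) belongs to the same algorithm family and illustrates the round trip. The field names and values below are invented for illustration, not actual DIAG message contents.

```python
import zlib

# Once the compression algorithm is known, "scrambled" traffic is trivially
# recoverable by anyone who captures it on the wire.
payload = b"RSYST-BNAME=JSMITH&RSYST-BCODE=secret"  # hypothetical GUI fields
wire = zlib.compress(payload)       # what an eavesdropper would capture
recovered = zlib.decompress(wire)   # no key material is needed to reverse it
assert recovered == payload
print(recovered.decode())
```

This is why the countermeasures below stress SNC encryption for client-server traffic: only a cryptographic layer, not the compression, protects credentials in transit.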

Gallo’s research provides an unprecedented insight into the inner workings of the DIAG protocol. The vulnerabilities revealed by the research can be exploited through both client and server-side attacks. Deep inspection of DIAG packets can be performed through the SAP Dissection plug-in developed by Gallo for Wireshark, a popular network protocol analyzer. The research underscores the importance of strong countermeasures in SAP systems. This includes restricting access to the Dispatcher service responsible for managing user requests, SNC encryption for client-server communication, disabling SAP GUI shortcuts used by attackers to execute commands in target systems, effective patch management, and periodic vulnerability assessment and penetration testing.