Layer Seven Security

Cybersecurity Insurance: Is it Worth the Cost?

According to the most recent annual Cost of Cyber Crime Study by the Ponemon Institute, the average cost of detecting and recovering from cyber crime for organizations in the United States is $5.4 million. Median costs have risen by almost 50 percent since the inaugural study in 2010. This finding masks the enormous variation in data breach costs, which can range from several hundred thousand to several hundred million dollars depending on the severity of the breach. A growing number of insurance companies are offering cyber protection to enable organizations to manage such costs. These include traditional carriers in centers such as London, New York and Zurich, as well as new entrants targeting the cybersecurity insurance market. Carriers in the latter category should be carefully vetted, since some new entrants have been known to offer fraudulent policies to exploit the growth in demand for cyber insurance.

Cybersecurity insurance has been commercially available since the late 1970s but was limited to banking and other financial services until 1999-2001.  It became more widespread after Y2K and 9/11. Premiums also increased after these events and carriers began to exclude cyber risks from general policies. More recently, the dramatic rise in the threat and incidence of data breaches has propelled cybersecurity into a boardroom issue and led to a growing interest in cyber policies from organizations looking to limit their exposure.

A 2011 study performed by PricewaterhouseCoopers revealed that approximately 46 percent of companies possess insurance policies to protect against the theft or misuse of electronic data, consumer records, etc. However, this is contradicted by the findings of a 2012 survey by the Chubb Group of Insurance Companies, which revealed that 65 percent of public companies forego cyber insurance. The discrepancy may be due to a general lack of awareness among respondents of the exact nature of their insurance coverage. Many respondents appear to be under the impression that cyber risks are covered by general insurance policies, even though this is no longer the norm.

The cybersecurity insurance industry is highly diverse, with carriers employing a variety of approaches. Some offer standardized insurance products with typically low coverage limits. Others provide customized policies tailored to the specific needs of each client. Furthermore, the industry is evolving rapidly to keep pace with changing threats and trends in cybersecurity.

Policy premiums are driven primarily by industry factors. E-commerce companies performing online transactions while storing sensitive information such as credit card data are generally considered high risk and are therefore subject to higher premiums. Healthcare institutions hosting data such as Social Security numbers and medical records are also deemed high risk.

Premiums typically range from $10,000 to $40,000 per $1 million of coverage, and policies provide up to $50 million in total coverage. However, most standard policies only provide coverage for specific third-party costs, meaning losses incurred by a company’s customers or partners. This includes risks related to unauthorized access and the disclosure of private information, as well as so-called conduit injuries that cause harm to third-party systems.
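As a rough illustration of how these indicative figures combine, the sketch below estimates the annual premium range for a requested amount of coverage. The rates and coverage cap are the ballpark figures quoted above, not a quote from any carrier; actual premiums depend on industry, risk profile and the underwriting process.

```python
# Indicative premium estimate based on the ranges cited above
# (roughly $10,000 - $40,000 per $1 million of coverage, capped at $50 million).
# Figures are illustrative only.

RATE_LOW = 10_000           # USD per $1M of coverage, low end
RATE_HIGH = 40_000          # USD per $1M of coverage, high end
MAX_COVERAGE = 50_000_000   # typical upper coverage limit cited above

def premium_range(coverage_usd: float) -> tuple:
    """Return an indicative (low, high) annual premium for the requested coverage."""
    coverage_usd = min(coverage_usd, MAX_COVERAGE)
    millions = coverage_usd / 1_000_000
    return millions * RATE_LOW, millions * RATE_HIGH

low, high = premium_range(10_000_000)
print(f"Indicative premium for $10M coverage: ${low:,.0f} - ${high:,.0f}")
# Indicative premium for $10M coverage: $100,000 - $400,000
```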

Policies that provide coverage for first-party areas such as crisis management, business interruption, intellectual property theft, extortion and e-vandalism carry far higher premiums and are therefore relatively rare. This limits the appeal of cybersecurity insurance and means organizations will need to self-insure against such risks for the foreseeable future. The situation is unlikely to improve until actuarial data for cybersecurity risks is more widely available and shared between carriers. This may require the establishment of a federal reinsurance agency and legislative standards for cybersecurity.

Carriers are unlikely to offer full coverage for all first- and third-party costs arising from security breaches. This is due to the moral hazard associated with such coverage: organizations that completely transfer cyber risk have no incentive to invest in preventative and monitoring controls to manage security risks. For this reason, most carriers apply exclusions for breaches caused by negligence. Other common exclusions include fines and penalties, often for regulatory reasons.

Aside from industry considerations, the other factors that drive premiums for cybersecurity insurance are an organization’s risk management culture and practices. Carriers often assess cybersecurity policies and procedures before setting premiums. Organizations that adopt best practices or industry standards for system security are generally offered lower premiums than those that do not. Therefore, insurers work closely with clients during the underwriting process to measure the likelihood and impact of relevant cyber risks, including consideration of management controls. Carriers that choose not to assess the cybersecurity practices of prospective clients tend to compensate by including requirements for minimum acceptable standards within policies. These clauses ensure that carriers do not reimburse organizations that failed to follow generally-accepted standards for cybersecurity before a security breach. Cybersecurity standards for SAP systems are embodied in benchmarks that are aligned to security recommendations issued by SAP. This includes the SAP Cybersecurity Framework outlined in the white paper Protecting SAP Systems from Cyber Attack.

Cybersecurity insurance is most valuable for organizations with mature cyber risk cultures, including effective standards and procedures for preventing, detecting and responding to cyber attacks. It enables such organizations to transfer the risk of specific costs arising from security breaches that are more cost-effectively covered by a carrier than through self-insurance. Cybersecurity insurance is not a viable option for companies with weak risk management practices. Even if carriers were willing to insure such high-risk organizations, the premiums are likely to outweigh the cost of self-insurance. Furthermore, the likelihood that such organizations would be able to collect on their policies is low.

Five Reasons You Do Not Require Third Party Security Solutions for SAP Systems

You’ve read the data sheet. You’ve listened to the sales spin. You’ve even seen the demo. But before you fire off the PO, ask yourself one question: Is there an alternative?

In recent years, a large number of third-party security tools for SAP systems have emerged. Such tools perform vulnerability checks for SAP systems and enable customers to detect and remove security weaknesses, primarily within the NetWeaver application server layer. Most, if not all, are capable of reviewing areas such as default ICF services, security-relevant profile parameters, password policies, RFC trust relationships and destinations with stored logon credentials.

The need to secure and continuously monitor such areas for changes that expose SAP systems to cyber threats is clear and well-documented. The real question, however, is whether organisations really need such solutions. In 2012, the answer was a resounding yes. In 2013, the argument for such solutions began to waver and was, at best, an unsure yes with many caveats. By 2014, the case for licensing third-party tools had virtually disappeared. There are convincing reasons to believe that such tools no longer offer the most effective and cost-efficient solution to the security needs of SAP customers.

The trigger for this change has been the rapid evolution of standard SAP components capable of detecting misconfigurations that lead to potential security risks. The most prominent of these components is Configuration Validation, packaged in SAP Solution Manager 7.0 and above and delivered to SAP customers under standard license agreements. Configuration Validation continuously monitors critical security settings within SAP systems and automatically generates alerts for changes that may expose systems to cyber attack. Since third-party scanners are typically priced based on the number of target IPs, Configuration Validation can directly save customers hundreds of thousands of dollars per year in large landscapes. The standard Solution Manager setup process will meet most of the prerequisites for using the component. For customers that choose to engage professional services to enable and configure security monitoring using Solution Manager, the cost of such one-off services is far less than the annual license and maintenance fees for third-party tools.

The second reason for the decline in the appeal of non-SAP delivered security solutions is a lack of support for custom security checks. Most checks are hard-coded, meaning customers are unable to modify validation rules to match their specific security policies. In reality, it is impossible to apply a vanilla security standard to all SAP systems. Configuration standards can differ by environment, by the applications supported by the target systems, by whether the systems are internal- or external-facing, and by a variety of other factors. Therefore, it is critical to leverage a security tool capable of supporting multiple security policies. This requirement is currently only fully met by Configuration Validation.
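To illustrate why support for multiple policies matters — and purely as a conceptual sketch, not a description of how Configuration Validation is implemented — the example below evaluates a set of extracted profile parameters against more than one hypothetical target policy. The parameter names are genuine SAP profile parameters, but the policy values and the extraction step are assumptions for illustration.

```python
# Conceptual sketch of policy-based configuration checks.
# This is NOT how SAP Configuration Validation works internally; it simply
# illustrates why a tool must support multiple target policies rather than
# a single hard-coded baseline.

# Hypothetical security policies for different system types
POLICIES = {
    "internal_erp": {
        "login/min_password_lng": lambda v: int(v) >= 8,
        "login/no_automatic_user_sapstar": lambda v: v == "1",
    },
    "external_facing": {
        "login/min_password_lng": lambda v: int(v) >= 12,
        "login/no_automatic_user_sapstar": lambda v: v == "1",
        "icm/HTTPS/verify_client": lambda v: v in ("1", "2"),
    },
}

def validate(system_name, actual_params, policy_name):
    """Report parameters that deviate from the selected policy."""
    findings = []
    for param, rule in POLICIES[policy_name].items():
        value = actual_params.get(param)
        if value is None or not rule(value):
            findings.append(f"{system_name}: {param} = {value!r} violates {policy_name}")
    return findings

# Example: parameter values as they might be extracted from a target system
sample = {"login/min_password_lng": "6", "login/no_automatic_user_sapstar": "0"}
for finding in validate("PRD", sample, "external_facing"):
    print(finding)
```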

The third reason is security alerting. While some third-party solutions support automated scheduled checks, none can match the native capabilities of Solution Manager, which provides near-instant alerting through channels such as email and SMS.

The fourth and fifth reasons are shortcomings in reporting and product support: third-party tools cannot match the analytical capabilities available through the SAP Business Warehouse integrated within Solution Manager, nor the reach of SAP Active Global Support.

More information is available in the Solutions section including a short introductory video and a detailed Solution Brief that summarizes the benefits of Configuration Validation and professional services delivered by Layer Seven to enable the solution in your landscape. To schedule a demo, contact us at info@layersevensecurity.com.

M-Trends, Verizon DBIR & Symantec ISTR: Detecting and responding to cyber attacks has never been more important

The release of three of the most important annual threat intelligence reports earlier this month confirmed that 2013 was an explosive year for cybersecurity. All three reports point to a rising incidence of cyber attacks, the increasing sophistication of attack vectors, and a growing diversity of threat actors and targets.

The first of the reports is M-Trends, compiled by the security forensics company Mandiant, now owned by FireEye. M-Trends is based on the analysis of incident response data from organisations across 30 industries. While the analysis detected a slight improvement in the average number of days taken by organisations to detect a network breach, there was no discernible improvement in the ability of organisations to detect breaches without outside assistance. Only 33 percent of breaches are discovered by internal resources.

The analysis also revealed that cybercriminals are deploying a wider variety of attack methodologies against targets. Traditional approaches involve the detection and exploitation of vulnerabilities in Web applications, which enable attackers to move laterally through connected systems after a successful compromise. According to M-Trends, attackers are shifting focus from Web applications to exploiting workstations and other systems infected with botnets and Trojans. These tools are designed to create backdoors for the installation and propagation of more powerful forms of malware designed to seek out and extract sensitive data.

The report notes that sensitive data goes beyond proprietary intellectual property. State-sponsored attackers target a wide variety of information sources to understand how businesses work including emails, procedural and workflow documents, plans, budgets, organisational charts, and meeting agendas and minutes.

M-Trends concludes that the list of potential targets has increased, and the playing field has grown. Threat actors are not only interested in seizing the corporate crown jewels, but are also looking for ways to publicize their views, cause physical destruction, and influence decision makers.

The second report is also the longest-standing and best known. The Verizon Data Breach Investigations Report (DBIR) is now in its eighth year and includes contributions from organisations such as the U.S. Secret Service, US-CERT, Europol and the Council on Cyber Security. The 2014 DBIR analyzed over 1,300 confirmed data breaches and 63,000 security incidents in 95 countries.

The highest number of security incidents analyzed by the DBIR affected organizations in the financial, retail and public sectors. This is unsurprising since such organizations tend to store or process financial and other sensitive information. However, the DBIR did not observe any industry that was untouched by security incidents leading to confirmed data losses. This underscores the DBIR finding that “everyone is vulnerable to some type of event. Even if you think your organization is at low risk for external attacks, there remains the possibility of insider misuse and errors that harm systems and expose data.” To illustrate, 30 percent of security incidents impacting manufacturing companies can be classified as acts of cyber espionage. In comparison, less than 1 percent of incidents in public sector organisations are caused by cyber espionage. However, public sector organisations experience three times as many incidents of insider abuse as manufacturing companies.

The third and final threat intelligence report released in April was Symantec’s Internet Security Threat Report, which revealed a 62 percent year-on-year increase in data breaches, with 8 breaches exposing more than 10 million identities each. According to the report, the industries most at risk of a targeted attack are mining, government and manufacturing. The likelihood that organisations in these industries will experience an attack is 1 in 2.7, 1 in 3.1 and 1 in 3.2, respectively.

The report also revealed that there were more zero-day vulnerabilities in 2013 than in any other year on record. The number of zero-day vulnerabilities discovered last year was 61 percent higher than the year before and more than the previous two years combined.

The report recommends multiple, mutually-supportive defense-in-depth strategies to guard against single points of failure. It also recommends continuous monitoring and automatic alerting for intrusion attempts, as well as aggressive updating and patching. These recommendations are echoed by both M-Trends and the DBIR. According to the former, organisations require “visibility into their networks, endpoints and logs.” Organisations also need actionable threat intelligence that identifies malicious activity faster.

Layer Seven Security enable SAP customers to meet this challenge by hardening every component of the SAP technology stack for defense in depth, including underlying networks, databases and operating systems. We also configure comprehensive network, system, table and user logs to enable organisations to track, identify and respond to cyber attacks. Finally, we unlock standard, powerful security monitoring mechanisms in SAP Solution Manager to automatically detect and alert on potential malicious activity.

Trustwave Survey Reveals that IT Professionals are Feeling the Pressure of Board Level Scrutiny over Cyber Security

The rise in the rate and sophistication of cyber attacks has predictably fuelled the pressure on security resources. However, the precise complexion and source of that pressure were largely unknown until the recent release of the Trustwave Security Pressures study. The study examines the threats of greatest concern to security professionals and their preferred responses.

The results of the study are based on survey responses from over 800 decision makers in the US, UK, Canada, and Germany, including CIOs, CISOs, and IT Directors/Managers. Almost 60 percent of respondents were IT/Security Directors or higher and 75 percent represented organisations in North America.

Over 50 percent of IT professionals experienced more security-related pressures in 2013 than the year before and almost 60 percent expect the pressure to grow in 2014. The source of the greatest pressure is the threat of external attack through targeted malware. The threat of data loss arising from a successful network and system breach also ranked highly as a stressor. Only 5 percent of respondents believe their organisations are not susceptible to attack.

The study revealed that owners, boards of directors and C-level executives exert the most pressure on IT professionals. This reflects the high visibility and growing board-level presence of security concerns. Cyber risk is a common and recurring subject on board agendas. According to Trustwave, executives and board members are increasingly demanding a deeper explanation from IT professionals on security postures and often display a lack of confidence in IT risk management strategies. This wariness stems partly from the seeming inability of conventional security products and solutions to stem the tide of cyber attack and data loss.

The study also revealed that respondents struggle with the complexity of security solutions, shortages of dedicated resources, and the control of capital and operational budgets.

The study recommends a number of specific actions to relieve the pressure. The first involves accepting the growing level of scrutiny from boards and other sources over security practices and managing security programs as strategic business initiatives with regular reporting to executive management. Other recommendations include augmenting in-house security expertise by partnering with outside security consultants, performing periodic risk assessments and penetration tests, focusing upon securing external-facing systems, controlling third party access and avoiding over-reliance upon security tools that provide a false sense of security.

Layer Seven’s Cybersecurity Framework delivers a comprehensive strategy to protect SAP systems from cyber attack and data breach. The framework provides a series of actionable recommendations to alleviate the growing pressure on IT professionals while avoiding the need for capital expenditure in security software. The framework equips security professionals with the insight and expertise required to safeguard mission-critical SAP resources from cyber risks. Learn more.

A First Look at the U.S. Data Security and Breach Notification Act

On January 30, members of the U.S. Senate and House of Representatives introduced a new bill intended to enforce federal standards for securing personal information and notifying consumers in the event of a data breach. Sponsored by leaders of the Senate Commerce, Science and Transportation Committee, the Data Security and Breach Notification Act of 2014 would require the Federal Trade Commission (FTC) to develop and enforce nationwide security standards for companies that store the personal and financial information of consumers. According to Committee Chairman Jay Rockefeller, “The recent string of massive data breaches proves companies need to do more to protect their customers. They should be fighting back against hackers who will do whatever it takes to exploit consumer information.”

If enacted, the measures introduced by the Bill would direct the FTC to develop robust information security measures to protect sensitive data from unauthorised access and exfiltration. The FTC would also be empowered to standardize breach notification requirements across all states to ensure that companies need only comply with a single law. The law would be enforced jointly by the FTC and state attorneys general. Civil penalties for corporations and criminal penalties for corporate personnel would be imposed for violations of the law, the latter including imprisonment for up to five years. Unlike HIPAA and the SEC Disclosure Guidelines, the requirements of the Act are not limited to health organisations or publicly listed companies. They apply equally to private and public organisations that store customer information across all industries and sectors. They also apply to data entrusted to third-party entities.

The proposed federal data security and breach notification standards are firmly supported by the FTC. During a speech delivered to a privacy forum on December 12, 2013, FTC Chairperson Edith Ramirez endorsed the role of the FTC as an enforcer of consumer data protection standards. The organisation has aggressively pursued companies that have suffered data breaches for alleged unfair and deceptive trade practices and has imposed fines of up to $10 million. However, FTC rulings are often challenged on the grounds that the organisation lacks a clear legal mandate. The Data Security and Breach Notification Act would provide the FTC with that mandate, backed by clearly-defined standards for data protection.

This includes standards for identifying and removing vulnerabilities in systems that contain customer information and for monitoring such systems for breaches, as required by sections 2(C) and (D) of the Act. To learn about vulnerabilities affecting SAP systems and about implementing logging and monitoring to detect potential breaches in SAP applications and components, download our white paper Protecting SAP Systems from Cyber Attack. The paper presents a framework of 20 controls across 5 objectives to safeguard information in SAP systems from internal and external threats.

Measuring the Risks of Cyber Attack

Most studies that examine the impact of cyber attack tend to focus on a combination of direct and indirect costs. Direct costs include forensic investigations, financial penalties, legal fees, hardware and software upgrades, etc. The approach is typified by the annual Cost of Data Breach Study performed by the Ponemon Institute, now in its eighth year. The most recent study examines the costs incurred by 277 companies in 16 industry sectors across 9 countries. According to the study, average data breach costs per organisation range from $1.1M to $5.4M for the selected countries. Estimates include losses related to reputational harm, lower sales, the loss of intellectual property, and other forms of indirect costs, which can account for as much as 68 percent of the total cost of a data breach.

Since indirect costs are far harder to measure accurately than direct costs, yet proportionally more significant, estimates of the average cost of a data breach carry a high margin of error. Therefore, the actual costs incurred by organisations that suffer a data breach may be far higher or lower than the estimates provided by official studies.

A recent joint study performed by McKinsey and Company and the World Economic Forum presents a very different perspective on the risks of cyber attack. The results of the study are published in the report Risk and Responsibility in a Hyperconnected World, released earlier this week. It examines the global impact of cyber attacks and highlights risks often overlooked by conventional studies that focus on narrow definitions of direct and indirect costs. This includes opportunity risks, especially in the areas of cloud computing, data analytics and mobility. According to the study, such technological trends could create $10 trillion to $20 trillion in value for the global economy by 2020. Cyber risks lead to lower levels of trust and slower rates of adoption for cloud, big data and mobile technologies. The net result is that the risk of cyber attacks could lead to as much as $3 trillion in lost productivity and growth if it is not effectively managed before the end of the decade.

The study surveyed over 250 industry leaders across 7 sectors and 3 regions. 65 percent of respondents rated malicious external and internal attacks as the risk most likely to have a negative strategic impact upon their business. 69 percent believe that the sophistication or pace of attacks will continue to outpace the ability of institutions to defend against them, despite the fact that global spending on cyber security is expected to rise from $69 billion in 2013 to over $123 billion in 2020.

The study presents a proactive roadmap for building public and private sector capabilities designed to address cyber risks and accelerate innovation and growth. The roadmap includes prioritizing information assets based on business risks, scaling security efforts to the importance of those assets, integrating security into every area of technology from development to decommissioning as well as into business operations, deploying active defences to uncover attacks, continuous testing, and security awareness training.

Three Parallels between the POS Breach at Target Corp. and Vulnerabilities in ERP systems

The decision of the Office of the Comptroller at the U.S. Department of the Treasury to recognize cyber threats as one of the gravest risks faced by organisations today appears to be vindicated by the disclosure of an unprecedented data breach at Target Corporation shortly after the release of the Comptroller’s report. Specifics of the breach may not be known until the completion of an investigation currently underway by a forensics firm hired by Target to examine the incident. However, early reports suggest that the event may be one of the most devastating data breaches in recent years. According to a statement released by Target yesterday, approximately 40 million credit and debit card accounts may have been impacted between Nov. 27 and Dec. 15, 2013. The breach appears to have involved all of Target’s 1,800 stores across the U.S. Based on the current average of $200 per compromised record, some estimates have placed the damage of the breach at $8 billion, almost three times the company’s net earnings in 2012.

The significance of the breach is related not only to the volume of records that may have been compromised, but also to the type of data believed to have been extracted from Target. This includes sensitive track data stored within the magnetic stripe of payment cards. The card numbers, expiration dates and verification codes obtained through the track data could enable the perpetrators of the crime to create and sell counterfeit payment cards. There are three primary methods for compromising track data in retail scenarios. The first involves targeting switching and settlement systems. These systems are usually heavily fortified and traffic is commonly encrypted. The second entails the use of card skimmers. However, it is highly unlikely that skimmers could have been successfully installed across Target’s nationwide network of stores without detection. Therefore, the most likely method used by the attackers to obtain track data in such large volumes was through the compromise of the software that processes card swipes and PINs within Point-of-Sale (POS) systems at Target.

Unfortunately, POS systems are a neglected area of information security, often regarded as little more than ‘dumb terminals’. This point of view could not be further from the truth. Today’s POS systems are sophisticated appliances that often run on Linux and Windows platforms. Furthermore, readily-available software development kits (SDKs) for POS systems, designed to enable developers to rapidly deploy applications for such systems, could be abused to build dangerous forms of malware. This is the most probable cause of the breach at Target. Herein lies the first parallel between POS and ERP systems: although both process large quantities of sensitive information and lie at the core of system landscapes, security efforts are rarely equal to the strategic importance of such systems or aligned to the risks arising from their architecture.

The second parallel relates to the method used at Target to access and install the malware within the POS systems. This could only have been possible if the attackers were part of the software supply chain. Therefore, they most likely took advantage of some form of insider access. The counterpart in ERP systems is the often blind trust placed by organisations in third-party developers, consultants and system administrators with broad access privileges.

The final parallel is the use of malware specifically aimed at business systems rather than individuals or consumers. Both POS and ERP systems are witnessing a surge in targeted malware. Systems such as SAP have always contended with this threat. One of the earliest known Trojans for SAP was discovered in 2003: KillSAP targeted SAP clients and, upon execution, would discover and replace SAPGUI and SAPLOGON files. Today’s malware is capable of far more destructive actions such as key logging, capturing screenshots, and attacking SAP servers through instructions received from remote command and control servers. The recently discovered Carberp-based Trojan is an example of such a threat. You can learn more about the risks posed by this Trojan at the Microsoft Malware Protection Center.

Monitoring Access to Sensitive Data using SAP RAL

The disclosure of up to 200,000 classified documents belonging to the NSA by Edward Snowden in 2013, together with the release of over 750,000 U.S. Army cables, reports and other sensitive information by Bradley Manning in 2010, has drawn attention to the need to control and monitor access to confidential data in corporate systems. For this reason, the general availability of the latest version of the SAP NetWeaver Application Server in May could not have been better timed.

NetWeaver AS ABAP 7.40 includes a new component known as Read Access Logging (RAL) to register and review user access to sensitive data. The momentum for RAL is driven not only by well-publicised information leakages but also by data protection requirements impacting industries such as e-commerce, healthcare and financial services. RAL is also in demand among organisations that have a relatively open authorization concept and are therefore more susceptible to data misuse. Aside from enabling organisations to verify user access to sensitive data and respond to potential abuses before they lead to the mass exfiltration of information, RAL acts as a deterrent against such abuse when users are aware that their actions are logged and monitored.

RAL supports calls through RFC, Dynpro, Web Dynpro and Web service channels. It is not enabled by default and must therefore be activated by selecting the Enable Read Access Logging in Client parameter in the Administration tab of the RAL Manager, accessed via transaction SRALMANAGER. However, prior to enabling RAL, customers should follow several predefined configuration steps using the SAP_BC_RAL_CONFIGURATOR and SAP_BC_RAL_ADMIN_BIZ roles and associated authorization objects delivered by SAP. The first involves defining logging purposes to create logical groupings of log events based on the specific requirements of the organisation. The second step is creating log domains to group related fields. For example, a domain for customer-specific information could be created to band together fields such as address, date-of-birth, SSN, etc.

Steps one and two establish the overarching structure for log information. The actual fields to be logged are identified during step three through recordings of sessions in supported user interfaces. Once identified, fields are assigned to log conditions and domains in step four. SAP will initiate RAL when the Enable Read Access Logging in Client parameter is selected, which represents the final step of the configuration process.
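The resulting hierarchy can be pictured as a simple mapping from logging purposes to log domains to recorded fields. The sketch below is purely illustrative: the purpose, domain and field names are hypothetical, and RAL itself is configured through SRALMANAGER rather than through code.

```python
# Illustrative model of the RAL configuration hierarchy described above.
# Purposes group log events by business requirement; domains group related
# fields; fields are identified by recording sessions in the supported UIs.
# All names below are hypothetical examples, not SAP-delivered content.

ral_configuration = {
    "data_protection": {                          # logging purpose (step 1)
        "customer_identity": [                    # log domain (step 2)
            "ADDRESS", "DATE_OF_BIRTH", "SSN",    # fields captured via recordings (step 3)
        ],
        "payment_data": [
            "CARD_NUMBER", "IBAN",
        ],
    },
}

# Step 4 assigns the recorded fields to log conditions and domains; step 5
# activates logging via the 'Enable Read Access Logging in Client' setting.
for purpose, domains in ral_configuration.items():
    for domain, fields in domains.items():
        print(f"Purpose {purpose!r}, domain {domain!r}: logging {len(fields)} field(s)")
```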

Logs can be accessed through transaction SRALMONITOR or the Monitor tab of SRALMANAGER. Log entries include attributes such as time of the entry, user name, channel, software component, read status, client IP address and details of the relevant application server. Extended views provide more detail of log events than default views. The log monitor supports complex searches of events and filtering by multiple parameters.

RAL configuration settings can be exported to other systems through an integrated transport manager accessed through transaction SRAL_TRANS. Furthermore, logs can be archived using standard Archive Administration functions in SAP NetWeaver via transaction SARA.

Although RAL is currently only available in NetWeaver AS ABAP 7.40, a release is planned for version 7.31 in the near future. Layer Seven Security can enable your organisation to leverage the full benefits of Read Access Logging and safeguard confidential information in SAP systems. To learn more, contact our SAP Security Architects at info@layersevensecurity.com or call 1-888-995-0993.

New malware variant suggests cybercriminals are targeting SAP systems

Security researchers at last week’s RSA Europe Conference in Amsterdam revealed the discovery of a new variant of a widespread Trojan program that has been modified to search for SAP systems. This form of reconnaissance is regarded by security experts as the preliminary phase of a planned attack against SAP systems orchestrated by cybercriminals. The malware targets configuration files within SAP client applications containing IP addresses and other sensitive information related to SAP servers and can also be used to intercept user passwords.

The program is adapted from ibank, a Trojan that is best known for targeting online banking systems. Ibank is one of the most prevalent Trojans used in financial attacks, based on the number of infected systems. It is often deployed together with the Zeus Trojan to harvest system credentials and is assigned a variety of names including Trojan.PWS.Ibank, Backdoor.Win32.Shiz, Trojan-Spy.Win32.Shiz and Backdoor.Rohimafo. Once installed, the program operates within whitelisted services such as svchost.exe and services.exe and is therefore difficult to detect. It also blocks well-known anti-virus programs. Ibank installs a backdoor on infected systems, enabling remote control of infected hosts. It also provides spying functions and the ability to filter or modify network traffic and change routing tables. The program uses a wide range of APIs to log keystrokes, capture logon credentials, identify, copy and export files and certificates, and perform other malicious activities.
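Since the malware reportedly harvests SAP connection details from client workstations, a useful first step is simply knowing where those details reside. The sketch below checks a Windows workstation for the SAP GUI connection file saplogon.ini in its common default locations; the paths are assumptions based on typical installations and are not an exhaustive list.

```python
# Check a Windows workstation for SAP GUI configuration files that store
# connection details (server addresses, system IDs) targeted by this type
# of malware. Paths are common defaults and may differ per installation.
import os
from pathlib import Path

CANDIDATE_FILES = [
    Path(os.environ.get("APPDATA", "")) / "SAP" / "Common" / "saplogon.ini",
    Path("C:/Windows/saplogon.ini"),  # legacy location on older installations
]

def report_sap_config_files():
    """List SAP GUI configuration files present on this workstation."""
    for path in CANDIDATE_FILES:
        if path.is_file():
            print(f"Found SAP connection data: {path} ({path.stat().st_size} bytes)")

if __name__ == "__main__":
    report_sap_config_files()
```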

SAP customers are strongly advised to secure SAP installations against the threat of such an attack. Layer Seven Security use SAP-certified software to identify and remove vulnerabilities that expose SAP systems to cyber attack. This includes misconfigured clients, unencrypted interfaces, and remotely accessible components and services targeted by attackers. Contact Layer Seven Security to schedule a no-obligation proof-of-concept (PoC). PoCs can be performed against up to three targets selected from a cross-section of SAP systems and environments.

SAP HANA: The Challenges of In-Memory Computing

This article is an extract from the forthcoming white paper entitled Security in SAP HANA by Layer Seven Security. The paper is scheduled for release in November 2013. Please follow this link to download the publication.

According to research performed by the International Data Corporation (IDC), the volume of digital information in the world is doubling every two years. The digital universe is projected to reach 40,000 exabytes by 2020. This equates to 40 trillion gigabytes or 5200 gigabytes for every human being in the world in 2020. As much as 33 percent of this information is expected to contain analytic value. Presently, only half of one percent of available data is analyzed by organisations.
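The per-capita figure follows directly from the headline projection. As a quick check, assuming a projected 2020 world population of roughly 7.7 billion (an assumption for illustration, not a figure from the IDC study):

```python
# Quick arithmetic check of the IDC projection cited above.
# The 2020 world population is an assumed figure for illustration.
digital_universe_gb = 40_000 * 1_000_000_000  # 40,000 exabytes expressed in gigabytes
world_population_2020 = 7.7e9                 # assumed projection

print(f"{digital_universe_gb:,} GB in total")                               # 40,000,000,000,000 GB
print(f"{digital_universe_gb / world_population_2020:,.0f} GB per person")  # ~5,200 GB
```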

The extraction of business intelligence from the growing digital universe requires a new generation of technologies capable of analysing large volumes of data in a rapid and economical way. Conventional approaches rely upon clusters of databases that separate transactional and analytical processing and interact with records stored in secondary or persistent memory formats such as hard disks. Although such formats are non-volatile, they create a relatively high level of latency since CPUs lose considerable amounts of time during I/O operations waiting for data from remote mechanical drives. Contemporary persistent databases use complex compression algorithms to maximise data in primary or working memory and reduce latency. Nonetheless, latency times can still range from several minutes to days in high-volume environments. Therefore, persistent databases fail to deliver the real-time analysis of big data demanded by organisations that are experiencing significant growth in data, a rapidly changing competitive landscape, or both.

In-memory databases promise the technological breakthrough to meet the demand for real-time analytics at reduced cost. They leverage faster primary memory formats such as flash and Random Access Memory (RAM) to deliver far superior performance. Primary memory can be read up to 10,000 times faster than secondary memory and generate near-zero latency. While in-memory technology is far from new, it has been made more accessible to organisations by the decline in memory prices, the widespread use of multi-core processors and 64-bit operating systems, and software innovations in database management systems.

The SAP HANA platform includes a database system that processes both OLAP and OLTP transactions completely in-memory. According to performance tests performed by SAP on a 100 TB data set compressed to 3.78 TB in a 16-node cluster of IBM X5 servers with 8 TB of combined RAM, response times vary from a fraction of a second for simple queries to almost 4 seconds for complex queries that span the entire data range. Such performance underlies the appeal and success of SAP HANA. Since its launch in 2010, SAP HANA has been deployed by 2200 organisations across 25 industries to become SAP’s fastest growing product release.

SAP HANA has emerged against a backdrop of rising concern over information security resulting from a series of successful, targeted and well-publicized data breaches. This anxiety has made information security a focal point for business leaders across all industry sectors. Databases are the vessels of business information and, therefore, the most important component of the technology stack. Database security represents the last line of defense for enterprise data. It should comprise a range of interdependent controls across the dual domains of prevention and detection.

The most advanced persistent databases are the product of almost thirty years of product evolution. As a result, today’s persistent databases include the complete suite of controls across both domains to present organisations with a high degree of protection against internal and external threats. In-memory databases are in comparison a nascent technology. Therefore, most do not as yet deliver the range of security countermeasures provided by conventional databases. This includes:

Label based access control;
Data redaction capabilities to protect the display of sensitive data at the application level;
Utilities to apply patches without shutting down databases; and
Policy management tools to detect database vulnerabilities or misconfigurations against generally-accepted security standards.

The performance edge enjoyed by in-memory database solutions should be weighed against the security disadvantages vis-a-vis persistent database systems. However, it should be noted that the disadvantages may be short-lived. Security in in-memory databases has advanced significantly over a relatively short period of time. The most recent release of SAP HANA (SPS 06), for example, introduced a number of security enhancements over SPS 05, released a mere seven months earlier. These include support for a wider range of authentication schemes, the binding of internal IP addresses and ports to the localhost interface, a secure store for credentials required for outbound connections, and more granular access control for database users.

The most crucial challenge to database security presented by the introduction of in-memory databases is not the absence of specific security features but architectural concerns. Server separation is a fundamental principle of information security enshrined in most control frameworks including, most notably, the Payment Card Industry Data Security Standard (PCI DSS). According to this principle, servers must be single purpose and therefore must not perform competing functions such as application and database services. Such functions should be performed by separate physical or virtual machines located in independent network zones, due to differing security classifications that require unique host-level configuration settings for each component. This architecture also supports layered defense strategies designed to forestall intrusion attempts by increasing the number of obstacles between attackers and their targets.

Implementation scenarios that use in-memory databases such as SAP HANA as the technical infrastructure for native applications challenge the principle of server separation. In contrast to the conventional 3-tier architecture, this scenario leverages the application and Web server built directly into SAP HANA, known as XS (Extended Application Services). Unfortunately, there is no simple solution to the issue of server separation, since the optimum levels of performance delivered by in-memory databases rely upon the sharing of hardware resources between application and database components.

Aside from such architectural concerns, the storage of large quantities of data in volatile memory may amplify the impact of RAM-based attacks. Although widely regarded as one of the most dangerous security threats, attacks such as RAM-scraping are relatively rare. They are, however, becoming more prevalent since attackers are increasingly targeting volatile memory to circumvent the encryption of data in persistent memory. Another reason that RAM-based attacks are growing in popularity is that they leave virtually no footprint and are therefore extremely difficult to detect. This relative anonymity makes RAM-based attacks the preferred weapon of advanced attackers motivated by commercial or international espionage.

This paper presents a security framework for SAP HANA SPS 06 across the areas of network and communication security, authentication and authorization, data encryption and auditing and logging. It also provides security-related recommendations for the SAP HANA appliance and SAP HANA One. Taken together, the recommendations in this paper should support the confidentiality, integrity and availability of data in the SAP HANA in-memory database.