
Firewall revolution or evolution?

Anthony James, vice president of products, Fortinet April 15, 2010

Firewalls are again becoming the talk of the town. There is an enormous number of opinions, including claims of a recent firewall revolution that will completely change the firewall landscape. I will be the first to admit that the features and capabilities offered in today's firewall products are not the same as those offered in their original incarnation. But then again, traffic patterns and applications are not the same as they were when firewalls first hit the market.

If we look at some of the original firewall products (bypassing the whole proxy versus stateful debate), most focused on a simple yet powerful proposition – allow or deny specific protocols (applications) – and most often the policy was to deny all and allow a few exceptions. The general intent was to insert a barrier at the network border, fending off unnecessary and potentially dangerous application traffic. These firewall policies were based on a common way to identify the application – the layer 4 protocol identifier.

Today, applications have taken a dramatically different approach in terms of user interface and communication methods. It should not be a surprise that the majority of applications have moved from a proprietary, client-based executable user interface and unique communication protocol to a web-based interface / communication method. This “webification” of applications is due in part to the innovations in web technology and the ability to deliver rich user experiences that parallel previous “heavy” client-based GUI applications in a web-based environment. 

Given this change in application delivery, it is natural for firewalls to evolve and address the new challenge of application security. Obviously, the same principles apply as with the original firewall concept – allow or deny applications based on a corporate security policy. However, if every application uses a common web communication method such as HTTP on port 80, how would the traditional firewall implement appropriate controls? If port 80 is “allowed” through the firewall, it opens access to a plethora of applications, some of which could be contrary to the overall security policy.
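
To make that limitation concrete, here is a minimal sketch of the classic layer-4 decision, where the port effectively stands in for the application. The policy, names and example traffic are illustrative only, not any vendor's implementation:

```python
# Minimal sketch of a traditional layer-4 firewall decision, for illustration only.
# The policy, field names and example traffic are hypothetical, not any vendor's API.

ALLOW_PORTS = {80, 443, 25}   # default-deny: anything not listed is dropped

def port_based_decision(dst_port: int) -> str:
    """Classic firewall logic: the destination port *is* the application."""
    return "allow" if dst_port in ALLOW_PORTS else "deny"

# A hosted business app and a P2P client tunnelling over HTTP look identical here:
print(port_based_decision(80))    # allow -- business web application
print(port_based_decision(80))    # allow -- P2P traffic wrapped in HTTP, indistinguishable
print(port_based_decision(6881))  # deny  -- classic BitTorrent port, easy to block
```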

This is where things get interesting regarding the so-called “firewall revolution” being claimed today, whereby applications are identified based on their content, distinguishing, for example, between peer-to-peer (P2P) applications and hosted business applications. While this is a new way to identify applications, I don't agree it is a “revolution,” because other security technologies have been doing this type of detection for quite a while, including intrusion prevention/detection systems (IPS/IDS). With IPS/IDS technologies, the ability to distinguish between multiple applications on a common protocol employs exactly the same principle as the proposed new firewall “revolution.” The new “revolution” isn't a revolution at all. It is nothing new, just a new way to use existing capabilities.
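
A toy illustration of that shared principle, with made-up signatures that only hint at what real IPS/IDS or application-control engines actually do, might look like this:

```python
import re

# Hypothetical payload signatures; real IPS/IDS and firewall engines use far richer detection.
APP_SIGNATURES = {
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),            # P2P handshake marker
    "http":       re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),  # plain web request
}

def identify_application(payload: bytes) -> str:
    """Content-based identification: look inside the payload, ignore the port."""
    for app, pattern in APP_SIGNATURES.items():
        if pattern.search(payload):
            return app
    return "unknown"

print(identify_application(b"GET /crm/login HTTP/1.1\r\nHost: app.example.com"))  # http
print(identify_application(b"\x13BitTorrent protocol" + b"\x00" * 8))             # bittorrent
```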

It seems disingenuous, and just plain marketing hype, to say that extending application identification technology into a firewall policy is revolutionary. What is really happening is the evolution of firewalls to meet the evolution of applications.

If there is anything revolutionary about firewalls today, it is the integration of content-based security technologies into the firewall, something that was previously thought to be impossible. The true revolution is in identifying threats within the application content, irrespective of the application, not just a new way to identify an application and allow or deny it.

A security solution that harnesses the power of application control and content-based security enforcement is the true state of firewall technology innovation – especially if you agree that firewalls should be deployed as defense mechanisms to eliminate threats versus an “allow-or-deny” paradigm for application access.

 

How IT can win the security battle

Matthew Steele, director of strategic technology, Symantec March 12, 2010

Enterprise security is the classic “caught between a rock and a hard place” scenario. On one hand, the attacks are frequent and often quite effective. The losses mount quickly — $2.8 million annually for large enterprises.  Organizations face lost productivity, lost revenue, and a loss of customer trust.  

On the other hand, providing enterprise security is excruciatingly difficult. Even with massive staffs (230 or more for large enterprises), enterprises feel understaffed. And new data center initiatives – such as cloud computing and virtualization – make the job of providing enterprise security more difficult with each passing day. Despite these difficulties, the "Symantec 2010 State of Enterprise Security Report" shows organizations are holding their own and highlights simple steps IT managers can take to win the security battle.

Applied Research fielded the survey by telephone in January. The respondents came from three groups:

  1. Small enterprise  (500 – 999 employees)
  2. Mid-sized enterprises (1,000 – 4,999 employees)
  3. Large enterprises (5,000+ employees)

The 2,100 respondents came from a wide variety of industries and included a mix of CIOs, CISOs, and senior IT management in 27 countries. 

Enterprise security is IT's top concern

Forty-two percent of organizations ranked cybersecurity as their top risk, beating out such notables as traditional crime, natural disasters, and terrorism. On average, IT assigns 120 staffers to security and IT compliance. In large enterprises the number is even higher – 232.

Nearly all (94 percent) expect to implement changes to their cybersecurity efforts in 2010, with almost half (48 percent) forecasting major changes.

Enterprises are experiencing frequent attacks

Seventy-five percent of all enterprises have experienced cyberattacks in the past 12 months. Forty-one percent said these attacks were “somewhat/highly effective.” When asked about specific types of attacks, 57 percent reported somewhat to extremely fast growth, with “external malicious attacks” the fastest growing type.

Costs of cyberattacks are high

The study found all of the enterprises surveyed had experienced cyberlosses in 2009. The most common losses were:

  • Theft of customer personally-identifiable information
  • Downtime of environment
  • Theft of intellectual property
  • Theft of customer credit card information

These led to serious costs in 92 percent of the cases, most commonly:

  • Lost productivity
  • Lost revenue
  • Loss of customer trust

Enterprises reported an average combined cost of $2 million annually. For large enterprises, the cost was especially high – almost $2.8 million annually.

Enterprise security is becoming more difficult

Organizations have their hands full with the high frequency of attacks and staggering losses. Unfortunately, data center realities are making it even harder for IT to secure the enterprise.

Enterprise security is understaffed. The most impacted areas are:

  1. Security systems management
  2. Data loss prevention
  3. Network security
  4. Endpoint security

These security staffing woes come just as IT is rolling out initiatives that make providing security more difficult:

  • Infrastructure-as-a-service
  • Platform-as-a-service
  • Server virtualization
  • Endpoint virtualization
  • Software-as-a-service

So, two of the hottest new technologies – cloud computing and virtualization – are also the technologies most apt to make security staff's jobs more difficult.

Finally, enterprises are buried under IT compliance efforts. The study found that enterprises are currently exploring a staggering 19 separate IT standards or frameworks and are currently using eight of them. The top frameworks/standards mentioned were:

  •   ISO
  •   HIPAA
  •   Sarbanes-Oxley
  •   CIS
  •   PCI DSS
  •   ITIL

Recommendations

Organizations need to protect their infrastructure by securing their endpoints, messaging and web environments. In addition, defending critical internal servers and implementing the ability to back up and recover data should be priorities. Organizations also need the visibility and security intelligence to respond to threats rapidly.

IT administrators should protect information proactively by taking an information-centric approach to protect both information and interactions. Taking a content-aware approach to protecting information is key to knowing where sensitive information resides, who has access to it, and how it is entering or leaving your organization.

Organizations need to develop and enforce IT policies and automate their compliance processes. By prioritizing risks and defining policies that span all locations, organizations can enforce policies through built-in automation and workflow, and not only identify threats but remediate incidents as they occur or anticipate them before they happen.

Finally, organizations need to manage systems by implementing secure operating environments, distributing and enforcing patch levels, automating processes to streamline efficiency, and monitoring and reporting on system status.

For more information on Symantec's 2010 State of Enterprise Security study, click this link to visit the Symantec online newsroom.

 

Why intrusion prevention systems fail to protect web applications

Ryan Barnett, director of application security, Breach Security February 26, 2010

There is overwhelming evidence in reports such as the SANS Top Cyber Security Risks and the Verizon Data Breach Investigation Report that web applications are the Achilles' heel of most networks and criminals know it.  In order to protect web applications, the network security paradigm has to shift from “Keep People Out” to “What Are They Doing?” and the IT infrastructure spending needs to follow suit. 

Organizations need to protect themselves from today's attacks, which are occurring at the application layer. Intrusion prevention systems (IPSs) are often deployed in an attempt to protect web applications; however, they lack many key protection elements. Below are the top seven reasons why IPSs fail to protect web applications:

1. A jack of all trades is a master of none.

IPSs have a wide protocol focus and are not solely focused on HTTP. This results in a reduced amount of system resources and signatures being allocated to web application protection. Web application firewalls (WAFs), on the other hand, do not inspect other protocols and can apply all processing and inspection power only to HTTP/HTTPS traffic.

2. You can't see me (access to encrypted traffic).

You can't inspect what you can't see. Most commercial IPSs are not capable of decrypting SSL traffic, which leaves a blind-spot in your detection and a channel for attackers to interact with the web application. The ability to decrypt and inspect SSL traffic is standard for WAFs.

3. Can you speak HTTP? (Application layer logic understanding)

Since IPSs are not “native” HTTP speakers, they do not properly parse layer 7 web data down into its individual components, such as request headers, cookies, and parameter names and payloads. They typically treat the HTTP data as one large blob of text, which contributes to higher false positive and false negative rates. WAFs interpret the web data the same way the destination web application does, which means they can better understand the context and apply rules and signatures more accurately.
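
As a rough illustration (the request and field layout here are hypothetical), a WAF-style parser breaks the request into the pieces that rules can then target individually, rather than scanning one undifferentiated blob:

```python
from urllib.parse import urlsplit, parse_qs
from http.cookies import SimpleCookie

RAW_REQUEST = (
    "GET /account/search?user=admin&sort=name HTTP/1.1\r\n"
    "Host: shop.example.com\r\n"
    "Cookie: session=abc123; theme=dark\r\n"
    "\r\n"
)

def parse_http_request(raw: str) -> dict:
    """Break a request into the components a WAF inspects individually."""
    head, _, body = raw.partition("\r\n\r\n")
    request_line, *header_lines = head.split("\r\n")
    method, target, version = request_line.split(" ")
    headers = dict(line.split(": ", 1) for line in header_lines if line)
    cookies = SimpleCookie(headers.get("Cookie", ""))
    return {
        "method": method,
        "path": urlsplit(target).path,
        "params": parse_qs(urlsplit(target).query),
        "headers": headers,
        "cookies": {k: v.value for k, v in cookies.items()},
        "body": body,
    }

print(parse_http_request(RAW_REQUEST)["params"])   # {'user': ['admin'], 'sort': ['name']}
```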

4. Application layer rules (negative security model)

IPSs are mainly signature-based security systems, so the breadth and quality of those signatures are paramount. Unfortunately, most IPS signatures are based on vulnerabilities in public software, so they are not effective for custom-coded web applications. WAF rules, by contrast, should be generic in nature and provide “attack payload detection” to detect any variant of an attack.
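
A deliberately simplified sketch of such generic, payload-oriented rules (the patterns below are illustrative only and far cruder than production rule sets) might be:

```python
import re

# Two hypothetical "attack payload" rules; real negative-model rule sets are much larger.
NEGATIVE_RULES = {
    "sql_injection": re.compile(r"(?i)(\bunion\b.+\bselect\b|'\s*or\s+'?1'?\s*=\s*'?1)"),
    "xss":           re.compile(r"(?i)<\s*script\b"),
}

def match_attack_payload(value: str) -> list:
    """Negative model: flag any parameter value matching a known attack pattern."""
    return [name for name, rule in NEGATIVE_RULES.items() if rule.search(value)]

print(match_attack_payload("1' OR '1'='1"))               # ['sql_injection']
print(match_attack_payload("<script>alert(1)</script>"))  # ['xss']
print(match_attack_payload("plain search term"))          # []
```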

5. Application profiling (positive security model)

IPSs typically inspect each request on its own, without any type of correlation of previous traffic. Commercial WAFs have automated learning and profiling capabilities based on a statistical model of all traffic that create custom, positive security profiles for each web resource. This allows for an input validation policy that permits only acceptable data to pass through and blocks attacks that are missed by the negative security model.
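
For illustration, a hand-written stand-in for such a learned profile (a real WAF would build this statistically from observed traffic rather than hard-coding it) could look like this:

```python
import re

# Hypothetical learned profile for one resource; invented for illustration only.
PROFILE = {
    "/account/search": {
        "user": re.compile(r"^[a-zA-Z0-9_]{1,32}$"),   # "learned": short alphanumeric names
        "sort": re.compile(r"^(name|date|id)$"),       # "learned": only three values ever seen
    }
}

def validate_request(path: str, params: dict) -> list:
    """Positive model: anything outside the learned profile is a violation."""
    profile = PROFILE.get(path, {})
    violations = []
    for name, value in params.items():
        rule = profile.get(name)
        if rule is None or not rule.fullmatch(value):
            violations.append(name)
    return violations

print(validate_request("/account/search", {"user": "alice", "sort": "name"}))         # []
print(validate_request("/account/search", {"user": "1' OR '1'='1", "sort": "name"}))  # ['user']
```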

6. Application performance monitoring (Anti-automation/denial-of-service (DoS) defenses)

Acceptable traffic velocity levels are not a “one-size-fits-all” setting. Most IPSs have some form of baselining capability that monitors traffic flows and can flag significant deviations, but it is not granular enough to be applied to each individual application resource. Web application attacks such as DoS, brute force and scraping have unique thresholds for each site. WAFs are able to monitor request velocity levels, apply threshold restrictions per resource, and block when these settings are violated. Additionally, by monitoring application response times, true DoS conditions may be identified.
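
A minimal sliding-window sketch of per-resource thresholds (the limits and addresses here are invented for illustration) shows the idea:

```python
import time
from collections import defaultdict, deque

# Hypothetical per-resource thresholds: a login form tolerates far fewer requests per client
# than a product-listing page before brute force or scraping is suspected.
THRESHOLDS = {"/login": 5, "/products": 100}   # max requests per 60-second window
WINDOW = 60.0

_history = defaultdict(deque)   # (client_ip, resource) -> timestamps of recent requests

def over_threshold(client_ip, resource, now=None):
    """Sliding-window check: flag a client that exceeds the resource's threshold."""
    now = time.time() if now is None else now
    window = _history[(client_ip, resource)]
    window.append(now)
    while window and now - window[0] > WINDOW:
        window.popleft()
    return len(window) > THRESHOLDS.get(resource, 1000)

# Six rapid login attempts from one client trip the /login threshold of 5:
print(any(over_threshold("203.0.113.7", "/login", now=t) for t in range(6)))  # True
```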

7. Inspecting outbound data (information leakages)

IPSs focus mainly on inbound requests and pay little attention to the data leaving web applications. Attackers often use the data presented within web error messages to enumerate back-end database resources and fine-tune their attacks. WAFs are able to inspect outbound response body payloads for typical database error messages and block them so they are not provided to the client. In addition to error messages, WAFs are able to track the locations and amounts of sensitive data (such as credit card or Social Security numbers) and alert or block when there are changes.
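
A simplified sketch of that outbound inspection (the patterns are illustrative; real deployments use much larger sets and validate matches before acting) might be:

```python
import re

# Simplified, hypothetical outbound rules; production WAFs ship far richer pattern sets.
DB_ERROR_PATTERNS = re.compile(
    r"(?i)(you have an error in your sql syntax|ORA-\d{5}|unclosed quotation mark)"
)
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # very rough card-number shape
SSN_PATTERN  = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def inspect_response(body: str) -> list:
    """Flag outbound responses that leak error details or sensitive data to the client."""
    findings = []
    if DB_ERROR_PATTERNS.search(body):
        findings.append("database error message")
    if CARD_PATTERN.search(body) or SSN_PATTERN.search(body):
        findings.append("possible sensitive data")
    return findings

print(inspect_response("Warning: You have an error in your SQL syntax near 'OR 1=1'"))
# ['database error message']
print(inspect_response("Customer card on file: 4111 1111 1111 1111"))
# ['possible sensitive data']
```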

Conclusion

Organizations need to change their approach to securing web applications by using products with features specially designed for protecting layer 7 traffic and data exchange. While IPSs serve an important role in preventing network-level attacks, they just can't perform at the top of the stack. WAFs are specialized products that detect attacks against web applications in more depth than IPSs. The PCI Security Standards Council echoed this same sentiment in its Requirement 6.6 Application Reviews and Web Application Firewalls Clarified supplemental document, which lists many of the capabilities described here.



 

Is increased government regulation the answer to increased privacy protection?

Glen Kosaka, director of marketing, Trend Micro February 25, 2010

Data breaches involving privacy information continue to increase despite the costs, embarrassment and negative publicity associated with them. Common themes exist in these two recent breaches:

  • In May 2009, UC Berkeley's health services systems were breached, exposing the private information, including Social Security numbers, of 160,000 people.
  • In September 2009, the University of North Carolina's systems were breached, exposing 163,000 Social Security numbers of women taking part in medical research.

Although these two examples came from the health care segment in universities, privacy breaches are occurring with startling regularity across all industries and companies. So what is it going to take for companies to start taking these seriously and institute the proper level of security to prevent them? Is the answer more government regulation, or are enough companies already going through the normal processes to plan and deploy these protections?

I suspect that, for many companies, the cost of a data breach may be just another cost of doing business. For these companies, the costs of a breach, which studies have shown to be anywhere from $150 to $300 per record, are weighed against the costs of process change, education, security technology, and ongoing maintenance associated with reducing the risk of breach. Unless a senior executive intervenes and weighs in on the importance of brand protection and reputation, many companies choose to take a reactive rather than proactive approach.

However, for any individual whose privacy has been compromised, it is a major cost and hassle. It seems as if the pain of an individual or a group of victims is not enough to justify proper privacy protection by a company. This is one reason why there are many new government regulations being enacted to protect individual privacy, at both the federal and state level.

Regulations such as PCI, SB-1386, and HITECH affect many companies and industries, and are generally thought to be well constructed for protecting individual privacy. But what about nonregulated industries where none of these regulations apply? If there are significant privacy records to protect in any industry, it is only a matter of time before the government steps in with regulation if companies in that industry fail to adequately address privacy issues. The government doesn't care if you lose critical manufacturing plans or other intellectual property to a competitor. It doesn't care if all your customer contacts are stolen and sold to your competitor. Protecting this type of information is something a company should already be doing in order to protect its competitiveness. But the government does care if individual privacy is at risk, and will step in if companies don't step up.

Why is implementing a solution to prevent data loss so difficult? To be fair, this is not a problem that can be solved by one single technology. Addressing this problem often involves understanding how data is handled and transmitted, where data is stored, and educating employees about company policies. IT and security professionals often get overwhelmed by all the different potential leak channels and threats, and don't know where to start.

A layered defense is required for a comprehensive solution to data breach protection. Protecting against external threats such as data-stealing malware, hackers, and web application attacks is the first line of defense. This needs to be augmented by data loss prevention solutions which include both content monitoring and filtering as well as encryption capabilities. The ‘insider threat,' which arises from employees, contractors and partners, is often the source of the most damaging breaches, either due to carelessness or malicious criminal activity.


 

Security vision for the smarter planet

Anne Lescher, product marketing manager, IBM Security Solutions February 24, 2010

Today's environment

Over a year ago, IBM began a global conversation about how the planet is becoming smarter with an increasingly instrumented, interconnected and intelligent infrastructure. There is an explosive growth of data that is collected about virtually every aspect of our lives that we can connect and share across billions of devices with built-in intelligence. Our ability to use this data to visualize, control and automate what happens in our environment influences every aspect of our lives from financial transactions, to healthcare, retail, transportation, communications, government and utilities.

Security remains a prerequisite for doing business in today's dynamically evolving enterprise – managing risk in this environment is a challenge with constantly changing vulnerabilities, both internal and external, as the threats become more sophisticated. Failure to protect our systems results in lost business and impacts brand trust and business advantage.

Security for the smarter planet

Security can allow organizations to take risks and can be an enabler of innovative change. Let's look at how security can help manage complexity, reduce costs and assure compliance.

  • Identity is a focal point in today's global economy where trustworthy credentials are required for any interaction or transaction. The process of granting and maintaining digital identities, granting access to applications and information assets, and auditing user activities is a difficult and expensive one. Organizations spend an average of two weeks to set up new users on IT systems and typically up to 40 percent of existing user accounts are invalid. Identity and access management solutions can lower costs and mitigate the risks associated with managing user access to corporate resources; for example, reducing user provisioning time from days/weeks to minutes.
  • Security at the application layer is an important area to watch, with industry analysts estimating that 80 percent of organizations will experience an application security incident by 2010. The average deployed application has dozens, sometimes hundreds, of defects, and about 74 percent of application vulnerabilities have no patches available today, based on IBM X-Force research. Security should be an intrinsic aspect of business processes and operations, factored in from the initial security architecture through application development and implementation. Look at the ROI: businesses today spend 80 percent of development costs identifying and correcting defects, and a defect costs $25 to fix during the coding phase versus $16,000 in post-production. Secure design can improve product quality and reduce costs in the long run.
  • To cut costs and operate more efficiently, our customers tell us they want to adopt such technology paradigms as cloud and virtualization to provide dynamic operational support for peak capacity demands and data sharing. Unfortunately, security is often a roadblock. Effective data security and strong access controls can prevent security exposures when exploiting cloud technology. These and other security capabilities will only grow in importance as standards, such as PCI DSS, look at adding a requirements section specific to virtualization and cloud.
  • The average company is subject to hundreds, often thousands, of regulatory or industry-specific compliance mandates, not to mention internal policies and audit standards. Trying to address this mix of requirements is overwhelming. Automation can help with compliance monitoring – effectively collecting and analyzing security information and events – as well as with management and reporting for data privacy laws and industry regulations.

Few would dispute that IT security challenges are rising along with an increase in sophisticated threats. Organizations turn to any number of best practices for guidance, but adherence to IT service management (ITSM) disciplines and adoption of Information Technology Infrastructure Library (ITIL) practices have proven to be the most effective. Industry surveys indicate that 87 percent of breaches were considered avoidable through reasonable (foundational) controls, and that the highest performers in the area of security management were those that adopted ITIL as their best practice approach. When creating a security “foundation,” it is important that organizations take a business-driven perspective – ensuring they align IT with their business objectives, allocate risk across security domains, and enforce the appropriate security level in each area in light of business opportunities, threats, and vulnerabilities.

Brighter future

Technology has a huge potential to help manage risk while enabling innovation for business growth. Imagine a smarter planet where critical infrastructures are more secure, cities are safer, your identity and privacy are protected, and you have the ability to use social networking sites and new, cool apps on smart devices without worrying about the risks. A smarter and more secure planet is in everyone's interest, and the time for us to act is now!


 

ITIL + IT-GRC = mass * velocity

Steve Schlarman, eGRC solutions manager, Archer Technologies February 18, 2010

In the world of acronyms, information technologists seem to lag behind only government agencies in their ability to create jargon and abbreviations of cryptic concepts. IT-GRC is one member of the IT lingo club. The Information Technology Infrastructure Library, or ITIL, is a fellow acronym gaining more acceptance and popularity within the IT industry. ITIL provides a common framework to formalize a service-oriented management approach within IT and improve interaction between IT and the business.

Both IT-GRC and ITIL converge on one straightforward, yet complex, objective: Build an IT organization that is governed intelligently, meets customer and business requirements, and delivers a high level of service while minimizing risks and maximizing efficiencies and effectiveness. For many risk, audit and security professionals, ITIL remains an "IT Operations only" approach, but there are many ways to utilize ITIL to complement IT-GRC efforts.

One way to leverage the harmony between ITIL and IT-GRC is to look at governance, risk and compliance within IT as another IT service offered to the business. To this end, ITIL can be used as a guideline for implementing the IT-GRC program. The ITIL approach is defined by five stages that follow an IT service from inception through retirement:

1. Service Strategy: Defining the overall goals, objectives and business functions within the service.

2. Service Design: Designing the service components and processes within the overall service.

3. Service Transition: Managing the rollout process and change management to the service and process.

4. Service Operation: Executing the daily tasks and activities within the service.

5. Continual Service Improvement: Quality assurance and monitoring of the service for improvement and optimization.

IT-GRC can use this framework to guide the overall program development and management. While the entire sequence is beyond the scope of this blog post, the concepts within ITIL can be applied to IT-GRC, and IT-GRC program managers can leverage the approaches used within ITIL to build out the program.

With this in mind, I can explain my title for this article: ITIL + IT-GRC = Mass * Velocity. For those of you who can dust off physics equations stuck in your head from high school, you might recognize the Mass * Velocity portion. This is the equation to calculate Momentum (p=mv). My point is that for those organizations that are looking to implement IT-GRC programs and have already begun looking at ITIL to guide IT service development, there are some advantageous resources in your organization — namely those ITIL savvy operations people who may be able to help move the IT-GRC program along.

As you look to mature and formalize the risk and compliance program, a few well-aimed discussions may help to guide the IT-GRC processes.

 Besides, any conversations between the IT-GRC side of the house and the operations side are just gravy. Since there is no equation for gravy (except in some Southern states), you can use these conversations to pick up momentum toward meeting your IT-GRC goals.

If you're interested in a little more discussion on this topic, I invite you to read an article I recently published with the EDPACS Journal, titled "What ITIL Can Teach IT-GRC." (EDPACS: The EDP Audit, Control and Security Newsletter, Volume 40, Issue 2). And if you'd like to learn more about Archer's approach to IT-GRC, please download the Archer IT-GRC data sheet from our website.
 

2010 SC Awards Announces New Blogger Award Categories - Nominate Today!

Illena Armstrong February 17, 2010

With the SC Magazine Awards Blog, we're attempting to add thought-provoking subject matter from industry leaders on a wide variety of topics and issues facing the security industry today. Hopefully, these blog posts are providing you additional value and insight into the state of the industry and a forward-looking forum on the challenges we are likely to see in the future.

We're using a blog format, because as the threats and issues develop today, we believe that blogging is a key manner of communication and conversation that can quickly generate discussion among peers and other industry experts.

Many of our contributors and other security pundits blog on a regular basis in their own forums, offering their take on today's evolving threat landscape. This year, the SC Awards would also like to begin to recognize these pioneers in their own evangelist efforts.

I'm happy to announce that, for the first time ever, as part of the 2010 SC Awards program, we hope to formally recognize the security industry's most popular, poignant and prolific security bloggers.

Contest details: We will be recognizing three blogging categories — Most Popular Security Blogger, Best Corporate Security Blog and "Five to Follow," a collection of the top five security pundits on Twitter.

How to nominate: You may email nominations directly to scawardsbloggers@yourtechpr.com. Please specify the category you are nominating for and the URL of the blog or Twitter handle you would like to nominate. Nominations will be accepted until Monday (Feb. 22), at which point the top finalists will be posted on www.scmagazineus.com for a direct vote from our readers.

The SC Awards gala dinner and presentation* will take place on March 2, 2010 at the Intercontinental San Francisco. The SC Awards gala will be a night filled with the excitement of the SC Award winners, dinner and entertainment, along with top corporate IT professionals attending. This offers an invaluable opportunity to network with colleagues and peers, and to cultivate new contacts.

Thank you in advance for helping us to recognize our fellow online security gurus.

Sincerely,

Illena Armstrong

Editor-in-chief

SC Magazine

*Dinner information

Dinner reservations for the awards can be placed online at http://www.scmagazineus.com/scawards2010-finalists/section/1309/ . Don't forget to place your reservation promptly as places are limited and bookings will be accepted on a first-come, first-served basis.


 

Finding solutions for the problem of consumerization

Michael Angelo, security architect, NetIQ February 17, 2010

While it may not yet have reached fever pitch, there is a steady and growing awareness of the risks of a new trend in business computing: consumerization.

Consumerization has evolved into two different aspects – the first being the use of personal equipment for work purposes, and the second being the use of consumer services for work.

Both can potentially create issues in the corporate environment, though I will focus on only one side of consumerization: the use of personal equipment in the corporate environment, and the potential security issues this practice raises. Consumerization raises three primary questions:

1. What does acceptable use mean with respect to corporate policies, and how does one enforce those policies?

2. What is the impact of privacy laws from the perspective of the corporate customer, the individual device owner, and the corporation?

3. What are the implications of employee attrition with respect to the security of corporate information residing on personal computers?

Acceptable Use Policies have traditionally precluded the use of corporate equipment for non-business activities. They also often explicitly prohibited activities such as playing games, using file-sharing technology, or personal web surfing. There were also often additional policies that defined what software could be installed, as well as acceptable email content. Most significantly, these policies were ultimately enforced by the corporation's ability to access the machines for review and enforcement, based on corporate ownership of the equipment. However, when employees use their own computers, such access is far less clear cut. As a result, acceptable use policies as they are currently understood must be approached differently.

What to do? For now, you might want to review the acceptable use policy and rethink several aspects of it. For example, the policy might relax some of the rules, such as allowing non-work-related (licensed) programs on the machine. The policy might also take a stronger stance on items such as file sharing (especially torrents, which can eat system bandwidth and/or have questionable legal ramifications) or the use of third-party software for business purposes due to licensing issues.

Privacy Laws typically require that care be taken not to expose customer information and to report the leakage (potential and actual) of information to the customers. However, a non-corporate-owned system clearly introduces complexities, because security measures and restrictions that apply in the workplace may be different at home. An example of this is filtering software, which might not be acceptable on a user's personal machine.

Two scenarios need to be considered when addressing consistency in security infrastructure:

1. The employee goes home and gets an email from a company about a new widget. They click on the link and get spear-phished. If the corporate address book is exposed, does that mean customer data was also exposed?

2. The employee is surfing the web at home and becomes the victim of a drive-by attack. Does the employee now need to report the incident to the corporate security team, which must then notify customers of a potential breach?

Not all issues will be as hard to resolve. For example, the issue of what to do in the event of a stolen computer can be mitigated by requiring employees to use encryption for corporate data. 
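
As a rough sketch of that mitigation, assuming the third-party Python cryptography package (pip install cryptography) and leaving the hard problem of corporate key management aside, file-level encryption can look as simple as:

```python
# One possible approach, using the third-party "cryptography" package.
# Key management (where the corporate key lives, how it is escrowed and revoked when the
# employee leaves) is the hard part and is deliberately out of scope for this sketch.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, issued and escrowed by the company
fernet = Fernet(key)

corporate_memo = b"Q3 customer pricing - internal only"
token = fernet.encrypt(corporate_memo)   # what actually sits on the personal laptop

# Without the corporate key, a stolen laptop yields only ciphertext:
print(token[:20], b"...")
print(fernet.decrypt(token))             # b'Q3 customer pricing - internal only'
```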

What to do? Look at your current security enforcement technology. Perhaps you could replace filtering software with web page analysis software that pre-scans web pages for malware. Another solution might be to provide virtual machine tools for surfing so that the user's web environment is sandboxed.

Employee Attrition is a potential issue when an employee with a personal notebook computer containing 500 gigs of storage resigns. The notebook contains corporate and employee purchased software as well as corporate data (email, memos, customer data, etc.) and employee data (pictures, movies, personal email, letters, etc.).

The company does not want to (or can't) leave the employee with corporate materials (customer information, corporate secrets, software licenses, etc). In addition, asking the employee to delete the corporate materials is not realistic. On the other hand, the company might not be able to remove its intellectual property with a clean restore or system wipe.

What to do? There are many solutions to this problem. One simple solution may be to provide an external drive for users to boot from when at work or doing work activities, while another solution might be to enable virtual compartmentalization. 

In the end, none of these issues are real show stoppers, but as always in the realm of security, the key is planning ahead to avoid the worst of the problems, and being pragmatic about solving the ones that you didn't see coming.


 

Change is constant - so is compliance

Jonathan Sander, IAM/Security analyst, Quest Software February 16, 2010

Compliance Facts:
  • More than 40,000 rules (i.e. national laws) were passed by the U.S. government in the last decade.
  • The Weidenbaum Center at Washington University in St. Louis and the Mercatus Center at George Mason University in Virginia jointly estimate that agencies spent $49.1 billion to administer and police the 2008 regulatory enterprise.
With figures like that, today's reality is that IT organizations have been forced to sink or swim in keeping up with compliance and security requirements. They've got to do it faster, with fewer staff and a limited-to-nonexistent budget. Yet auditing Microsoft-based infrastructures for compliance with internal policies and external regulations can be a tedious, repetitive, time-consuming process fraught with risk. Not to be ignored: security breaches, malware, mistakes and leaks of sensitive enterprise data are serious threats to the organization, with internal security threats as perilous as external ones.

Failure is not an option, as lapses in compliance and breaches can lead to loss of IP, system downtime, frustrated end users, lost productivity, fines and negative publicity. And whether it is monitoring change events affecting Active Directory, Exchange or Windows file servers, the required reporting distracts administrators from working on other projects.

These additional pressures from external regulations, coupled with internal fiscal constraints, mean that IT organizations simply have no alternative but to work smarter by using the same budget dollar for both compliance and secure operations.

How do we do it?

Windows does provide native security event logs, which are managed on a per-server basis and contain events generated by every subsystem. That said, the problem with the native logs is that, with no centralized view, IT managers need to scour the event logs on each server and each subsystem.
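
To illustrate why that matters, here is a hypothetical sketch (the sample events and collection mechanism are invented; nothing here is a real Windows API) of merging per-server logs into a single timeline instead of scouring each box:

```python
from datetime import datetime

# Hypothetical sample events standing in for whatever your collection mechanism
# (agents, forwarders, exported logs) returns from each server's Security log.
PER_SERVER_LOGS = {
    "dc01":   [{"time": datetime(2010, 2, 16, 9, 5), "id": 4740, "msg": "Account locked out"}],
    "file01": [{"time": datetime(2010, 2, 16, 9, 2), "id": 4663, "msg": "File access attempt"}],
    "exch01": [{"time": datetime(2010, 2, 16, 9, 7), "id": 4625, "msg": "Failed logon"}],
}

def centralized_timeline(per_server_logs: dict) -> list:
    """Merge per-server security logs into one timeline, tagging each event with its server."""
    merged = []
    for server, events in per_server_logs.items():
        for event in events:
            merged.append({**event, "server": server})
    return sorted(merged, key=lambda e: e["time"])

for event in centralized_timeline(PER_SERVER_LOGS):
    print(event["time"], event["server"], event["id"], event["msg"])
```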

The fact is that IT organizations require a simple solution that:

  • Provides the who, what, when and where of all changes, including details on previous and new change values, with the ability to add comments on why a specific change was made, to fulfill audit requirements
  • Monitors and tracks all change events in real time across the network, eliminating the need for multiple solutions
  • Reduces risk by providing regulation-specific reporting, aligning with operational best practices, and preventing leaks of sensitive data
  • Facilitates faster audits with less work for IT by generating predefined, custom and ad-hoc reports to meet the needs of various stakeholders, including auditors
  • Controls costs, enabling IT organizations to use the same products to address compliance and security requirements and improve operational activities
  • Enables faster and smarter responses to threats or unusual activities as they occur
It is time to reduce risk and take control of Windows auditing, compliance and security.
 

Peeling the onion layer on the web security inertia

Mandeep Khera, CMO, Cenzic February 11, 2010

An onslaught of cyberattacks, including some high profile breaches at Heartland Payment Systems, government agencies, Facebook, Twitter, RockYou, and, most recently, at Google, continues. Websites (web applications) across the globe remain vulnerable and ripe for hackers to exploit. Although good progress has been made in the last 12 months by some sectors, we have a long way to go when it comes to securing websites with a methodical and disciplined approach.

I wonder about the root cause of this inertia. If you knew that your house is likely to get attacked, wouldn't you try to fix all the doors and windows, get locks and alarms, and take other precautions? So, why is it that in spite of some well publicized attacks and regulations, there's not a massive adoption of a process and solutions to secure websites?

After talking to hundreds of companies, government agencies, and industry luminaries over the past few years, I have narrowed down the reasons behind this phenomenon to a few myths and real inhibitors, which I explain below.

Top 5 Myths around Web application security

It turns out that many IT professionals and business line managers still believe that their existing security measures are enough to protect their websites. Here are some of the common myths.

  • I have SSL, so my websites are secure: Well, Secure Sockets Layer (SSL) has its place in helping protect consumers while they are conducting transactions online. However, it does nothing to prevent hackers from hacking into websites. So the SSL lock symbols on most sites can be misleading.

  • I have never been hacked, so I am fine: Gone are the days when hackers hacked to gain fame. Now, most web hacking is done by organized criminals and, in some cases, by government-sponsored organizations. These guys don't want you to know that you are being hacked.

  • I can test my web application once a year: Every month there are 400-plus new application-related vulnerabilities, and hackers know about them. Also, every time you make a change to a web application, you have to make sure it has not introduced new vulnerabilities.

  • Application security is painful to implement: Although it's more difficult to secure web applications than the network layer and desktops, there are many easy ways to get your process jump-started. Like all initiatives, once you get going, the road gets less bumpy.

  • I am PCI compliant: You have to protect your web applications to secure your most important asset – customer information. If your applications are secure, you'll pass the audit and comply with regulations. The reverse is not necessarily true.

Inhibitors

  • Budget: Many companies still haven't set aside a budget for application security. Often, application security is part of a bigger security budget, and if too much money gets spent on network security, identity management, data leakage prevention, and so on, sometimes there's not enough left for applications.

  • Lack of education: Many IT people, especially in upper management, are not fully aware of the implications of securing their web applications.

  • Lack of expertise: Even when an organization is committed to implementing an application security program, it might not have the right expertise to create the right processes.

  • Unclear standards: Regulatory standards can help organizations focus on the right priorities and obtain a budget. Most current regulations are very broad about security, without much clarity on application security.

  • Attacks are not publicized: In spite of continuous attacks at the web application layer, many of these attacks are not publicized, or are not publicized with the application security aspect of the breach highlighted.

All indications are that cyberattacks at the web application layer will continue to rise in the coming months and years. With close to 80 percent of vulnerabilities residing in web applications and more than 75 percent of attacks happening through websites, the question is not IF you will get attacked, but WHEN.

It's very easy to get started with a web security program. There's a lot of help available to move you along the process. You just need to take that first step.


 