
Friday, September 30, 2011

SOA, Cloud and the network–part 1

Layer interaction in service-oriented architecture (image via Wikipedia)

We have been talking about new architectures for our environments for quite a long time now.
What is leading the way nowadays is SOA and Cloud, but what does implementing those architectures really mean for our networks?
One of the problems I've noticed when talking with customers and partners is that they usually try to apply the same techniques they used for old network deployments to the new ones. This is a mistake for several reasons, but even from a purely philosophical point of view it makes little sense, if any, to apply old rules to new ideas.
So what has really changed in these approaches (cloud and SOA) that requires us to shift the way we design and deploy networks?
There are some evident changes. First of all, the topology of connections has been dramatically modified: where once we could simply assume an identity between a user or service and its IP address, this is no longer possible.
The reasons behind this are easily found on both the client and server sides of the equation.
No more physical servers location
Virtualization simply changes the rules of the game, breaking the identity between the physical location of a server and the service it provides. This is a huge change in the way we should plan and deliver our services.
The classic structure was something like this:

The service used to be provided by one or more servers with a well-defined physical location and IP.
The client usually shared the same configuration: a well-defined physical location and a fixed IP (or an address taken from a well-defined pool).
In this situation it was relatively simple to define access and security rules.
Users were defined by their membership in a specific directory group (Active Directory, LDAP, or... who really cares?), while client computers were identified by their IP range.
From a service delivery and security perspective this translated into two completely separate sets of activities:
The owner of the network used to set delivery and security rules based on IP and MAC addresses, creating tables to allow or block access to physical locations defined by their IP ranges. This way there was a sort of identity between IP structure and topology, which was then mirrored at the upper layers by software services.
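The IP-based model described here can be sketched as a simple allow/deny check: an access rule is nothing more than a set of permitted source ranges per service. The service names and subnets below are hypothetical, just a minimal sketch of the classic approach.

```python
# Minimal sketch of classic IP-range-based access control.
# Service names and subnets are invented for illustration.
import ipaddress

# Each service is reachable only from fixed, well-known client subnets.
ACCESS_RULES = {
    "payroll-db": [ipaddress.ip_network("10.10.20.0/24")],   # HR floor only
    "file-share": [ipaddress.ip_network("10.10.0.0/16")],    # whole campus
}

def is_allowed(service: str, client_ip: str) -> bool:
    """Return True if client_ip falls inside one of the service's permitted ranges."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ACCESS_RULES.get(service, []))
```

The whole security model reduces to table lookups on addresses, which is exactly why it breaks once the address no longer identifies anything stable.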
The owner of the service was, at the same time, able to ignore the network structure and limit security and delivery to the authentication of the requester, providing different levels of access to the different layers or services offered by the software.
This approach led information technology for decades; then something happened: the disruptive introduction of virtualization.
Virtualization has been a successful technology because of its promise to lower the TCO of our networks.
The original idea was to abstract the OS and applications from the physical server, making the physical server able to run multiple different instances.
The advantage was a standard physical-layer interface seen by the OS (no more driver nightmares, BIOS upgrade pains and the like) and the possibility to reduce the overall number of physical devices by running more instances on one piece of hardware.
The increasing power of hardware platforms made this approach successful, but at the beginning virtualization was just used to hide the physical server and nothing more.

Nothing really changed here, besides the fact that more services were running on the same physical platform.
But changing technology creates new needs, and so virtual infrastructures evolved into something completely new.
Basically, the abstraction layer provided by the virtual environment has been expanded to offer a complete abstraction from the physical topology. Nowadays a virtual environment can run as a single environment spanning different hardware and different locations; at the same time, the services running inside it can move from one hardware structure to another according to computational needs, and for the same reason instances can be created on the fly by the service layer or by the virtual environment itself.
This is a radical change in the design of applications, security and networks. While before a simple IP was a good token to identify a physical layer, a virtual one and a service, now everything is more complex.
From a logical point of view, the design problem is that we have multiple required connections inside the virtual environment: the entities inside it can create complex relationships among themselves (think of a classic SOA implementation), and they also need to reach the physical layer.
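The shift can be illustrated with a toy service registry: instead of resolving a service to a fixed address once and for all, consumers look up wherever the instance currently runs, and the answer can change after a live migration. All service names and addresses here are invented for illustration.

```python
# Toy service registry: the service-to-host binding is mutable,
# unlike the old identity between a service and a fixed IP.
registry = {"billing": "10.0.1.15"}  # current location of the 'billing' VM

def resolve(service: str) -> str:
    """Look up where the service runs *right now*."""
    return registry[service]

def migrate(service: str, new_host: str) -> None:
    """Simulate a live migration: same service, new physical location."""
    registry[service] = new_host

before = resolve("billing")
migrate("billing", "10.8.3.42")   # VM moved to another datacenter
after = resolve("billing")
```

Any rule written against `before` silently stops matching the service after the move, which is the core reason IP-anchored policies struggle in a virtualized environment.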
There are obvious problems related to authentication, identity and flow control, network control and monitoring inside the virtual environment, as well as its interaction with the physical one. In a single datacenter, the physical backplane and the communication between physical servers is usually a problem solved with datacenter-specific technologies such as Cisco Unified Computing.
The situation is actually way more complex if we consider a geographical implementation, as used to build SaaS or cloud architectures.
Different environments can be located in different datacenters and yet offer a single virtual environment.
Applications living in the virtual servers can be located anywhere and change location upon request or load requirements.

In this situation we add another layer of complexity to the structure, since the virtual layer needs physical geographical connections that emulate the single virtual environment, and at the same time applications need to communicate both inside and outside their virtual environment.
The physical network layer needs to manage several different kinds of traffic: communication between the virtual-layer units, communication between different services that may need to talk outside the virtual environment (a typical SOA requirement), and communication with the clients requesting the service (we'll expand on this shortly).

This kind of situation is typical in cloud implementations, where the physical location of the provided service should not influence the client experience, no matter where the client is.
In a typical SOA implementation we add a new level of complexity, since the provided service can be generated by different units that can be stored, generated and delivered in different fashions.

This kind of complexity is hard to manage with traditional techniques. The first thing we have to realize is that we need to extend control inside the virtual environment and its units from a network, authentication and identity point of view.
Since this post is not strictly about SOA architecture, I won't go deeper into module authentication and security needs; I will talk generally about some network requirements.
Any service that needs to communicate with another, inside or outside the virtual environment, through a network protocol (TCP/IPv4 or TCP/IPv6) usually needs to be provided with some sort of connection link. This can be provided by a physical switch or by a virtual one running in the virtual environment. Using a physical switch can appear to be a great solution in terms of performance and security; this is actually a misconception for several reasons:
First of all, communication outside the virtual environment imposes an overhead on both the service and the virtual environment; if we widen the structure to a geographical scale this overhead can be barely manageable.
The second aspect to keep in mind is that some network attacks become easier in this situation, since the real communicator is hidden behind the virtual shield; impersonating a service and accessing its data is therefore not a remote threat.
If the physical switch cannot scale well, the virtual one has, on the other side, its own set of problems: resource consumption (CPU and network latency, for instance), the need to interface with the physical environment, mismatched VLAN systems, and so on.
The problem is to overcome those limitations while keeping the good parts of both solutions.
The solution the market presents nowadays is the integration of a virtual switch layer with a physical, datacenter-scalable one.
The idea is to have a single switch with two faces, one in the virtual world and one in the physical world. The Cisco Nexus line is a good example of this kind of approach.
The same considerations that apply to switching also apply to firewalling. Since what happens inside a virtual environment is a sort of black box to the outside world, keeping a security eye on it, checking that the correct communications are in place and nothing strange is happening, is mandatory. Again we have a dichotomy between the physical and virtual worlds; the solution nowadays is to adopt a virtual firewall able to deal with internal virtual-environment communications. Good examples can again be found at Cisco with VSG and the virtual ASA.
Cisco VSG Security in a Dynamic VM Environment, Including VM Live Migration


Basically, this kind of solution addresses two needs: managing and securing internal virtual traffic, and giving an interface from the physical world to the virtual one and vice versa.
Alas, this is only one part of the equation: if on one side we have the problem of controlling, managing and deploying the services we want to provide, on the other end we have the problem of delivering those services to someone who can use them.
Here the problem is again evolving due to several factors: the vanishing of the physical borders of our networks, the consumerization of browser-capable devices, and the shift from simple data to rich, context-aware multimedia content, just to name a few.
Users try to access resources from anywhere with different devices, and we are barely able to know from where they will connect.
The initial situation was relatively easy to manage: as with servers, clients were easily locatable, and an IP address was more than enough to build a trust relationship between client and server.

With datacenter consolidation the number of servers and devices grew, but with a still limited presence of remote users the location of both sides was quite easy to determine. The introduction of VLAN technologies, stateful-inspection firewalls, L3/L4 switches and the pervasive use of access lists addressed (at least apparently) most of the issues.

Virtualization opened a breach in this structure, introducing a first layer of indetermination: virtual servers and services were no longer physically defined by their IP, since they could share the same physical location.

While complexity was growing on the "server" side, the client side was also expanding, with a higher presence of remote users and the introduction of new services on the network (who doesn't have an IP phone nowadays?).

More devices mean more network requirements, and so datacenter complexity, thanks to virtual technology, expanded beyond the constraints of a single physical location. As we discussed before, this led to a series of problems that were paired, on the client side, with the expansion of remote and local users using different devices.

And then came the cloud, and the final vanishing of any predetermined physical location for our clients and our services.

Client and server sides thus evolved in an interconnected way, but network components and designs have not always followed this thread.
Using old-fashioned access lists, IP-based network policies and static VLAN assignment to manage this situation creates a level of complexity that makes things unmanageable: nowadays firewalls require thousands of rules to accommodate every single special need, and alas, we all have a lot of special needs.
It's clear to me that in this situation we need to shift from a static design to a dynamic one, able to meet the different needs of this evolving environment. A technology like Cisco TrustSec addresses these requests: using SGTs (Security Group Tags) it dynamically assigns security-group membership based on user identity, regardless of IP or location, driving packets to their destination accordingly and encrypting the network communication. Driving traffic correctly regardless of IP is a mandatory requirement in a dynamic cloud or SOA environment.
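The tag-based idea can be sketched in a few lines: policy is expressed on identity-derived tags, never on addresses, so a user keeps the same access wherever they connect from. This is only an illustrative sketch inspired by SGT-style tagging; the groups, tag numbers and policy matrix are hypothetical and do not reflect any actual Cisco API.

```python
# Sketch of identity-driven, IP-agnostic policy (SGT-style tagging).
# Groups, tag values and the policy matrix are hypothetical.
USER_GROUPS = {"alice": "finance", "bob": "contractor"}
GROUP_TAG = {"finance": 10, "contractor": 20}

# Policy is a matrix over (source tag, destination tag) pairs,
# with no IP address anywhere in the rules.
POLICY = {(10, 100): "permit", (20, 100): "deny"}  # 100 = payroll service tag

def decide(user: str, dest_tag: int) -> str:
    """Classify traffic by the user's identity-derived tag, regardless of IP."""
    src_tag = GROUP_TAG[USER_GROUPS[user]]
    return POLICY.get((src_tag, dest_tag), "deny")  # default deny
```

Note that `decide` never sees an address: the same small tag matrix replaces what would otherwise be thousands of IP-based firewall rules.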
As important as driving the network traffic correctly, there is also the need to determine which kind of access we want to grant. We have plenty of devices (tablets, smartphones, laptops, IP phones, printers, scanners, physical security devices, medical equipment) that need to access our services somehow and need to be authorized on the network. Using a network access control service is just as mandatory, to correctly filter devices on both wireless and wired networks (think of what happened recently in Seattle to understand this kind of need). Again we can think of a Cisco product like ISE to accomplish this.

End of part 1


Thursday, September 22, 2011

Forensic Software Tools

Logo of the National Institute of Standards and Technology (image via Wikipedia)


This post summarizes the features and advantages of a large number of software forensics tools. For detailed information and technical reports it is always best to view the vendor Web sites, as well as organizations that conduct technical reviews and evaluations, such as the National Institute of Standards and Technology (NIST).

The Computer Forensic Tools Testing project (CFTT) web site contains additional valuable information:
  • http://www.cftt.nist.gov/disk_imaging.htm
  • http://www.cftt.nist.gov/presentations.htm
  • http://www.cftt.nist.gov/software_write_block.htm
The information presented here is heavily based on the assertions of the various vendors who make the products listed. Much of it has been taken from the vendors' product sheets. The Computer Forensic Tools Testing project is a good source of comparative data when deciding among these vendors.

Some of the tools I plan to discuss in my future posts are:

  • Visual TimeAnalyzer
  • X-Ways Forensics
  • Evidor
  • DriveSpy
  • Ontrack
Stay tuned to learn more about these 🙂





Friday, 16 September 2011

A.I. Talking Points–Security Week Review


It is not easy to sum up what happens in this crazy market every week. Not because there are too few topics to explore but, if anything, because there are too many.
So let me try to summarize what I found relevant.

Mobile security isn’t just for geeks

Although many still do not consider mobile security a real problem for today's business, people should take a harder look at the actual landscape.
Let's focus on some main points that, again this week, have been clearly exposed by the news: mobile means a lot of different things: smartphones, phones, tablets, laptops and other odd devices.
And security means protecting data, communications, privacy and confidentiality.
So what we have had here is the exposure of private data taken from hacked communication devices. From the Scarlett Johansson case to Rupert Murdoch's News of the World hacking scandal there is a common thread: those devices must be protected, and anyone is at risk of exposure.
Of course other risks arise from the explosion of malware on modern devices: the old threats that used to be tied to the PC have now moved to "any device, anywhere", so be careful when searching for Heidi Klum online.
Financial malware is one of the nastiest pieces: it can run on your device while you do your home banking and have your data stolen…

Big banks and companies are in trouble, hackers are waiting out there (…do they realize this?)

From “bitcoin” to “fireeye”, malware is spreading, and even Stuxnet and Zeus are coming back. Reports say that cyber criminal activity costs billions to our suffering economy, but targets are widely underestimating the risks: their approach to security is still a traditional one that does not take the changed landscape into account. Yet the evidence shows us that everything has changed in these last years. Just to make it clear: shouldn't the latest hacks of defense companies all over the world (think of the Mitsubishi case, to name just one of the most recent) and consequences such as the DigiNotar bankruptcy make up our minds?

Four kinds of guys with the same weapons

Cyberterrorism, cyberwarfare, cyber activism and cyber crime are four sides of the same coin (but… how many faces do they have?)
People with different skills, targets and motivations seem to act alike. The truth is that they just use the same weapons, and sometimes they hit the same targets, but for very different reasons. Different reasons also mean different practices: while cyber activists choose "political" targets, cyber terrorists (or patriots, depending on which side you're on; think of the Comodo hacker who claims to have hacked DigiNotar) follow a different agenda. But being the target of different groups with different needs should make us think differently about what we have to protect. Changing the rules would be a better way to play.

Related articles
  • FBI investigating hacking of celebrities | InSecurity Complex – CNET … (portadiferro2.blogspot.com)
  • Fake Certificates Reveal Flaws in the Internet’s Security (portadiferro2.blogspot.com)
  • Anti-virus firms push security software for mobile devices (usatoday.com)
  • Scarlett Johansson Hacked! 5 Must-Read Mobile Safety Tips (self.com)
  • Despite “Year of the hack,” risky security behavior common … (portadiferro2.blogspot.com)
  • [Mobile Security App Shootout, Part 12] Webroot Mobile Security … (portadiferro2.blogspot.com)
  • Hollywood-Grade Mobile Phone Security: 4 Tips (informationweek.com)
  • Researchers Hack Mobile Data Communications and other web security news (portadiferro2.blogspot.com)
  • Italian researcher finds more SCADA holes (portadiferro2.blogspot.com)
  • Enterprise Risk Management Hosts Mobile Device Security Event for ‘On the Move Professionals’ (prweb.com)
  • Damaka Introduces World’s First Mobile Client for Microsoft Office 365™ (prweb.com)
  • Analysis: The Desktop OS May Be Dying, Not the Desktop (readwriteweb.com)
  • Forrester: More than half of enterprises support consumer phones (gigaom.com)
  • Organizations Over-Confident About Security Strategy: Survey (portadiferro2.blogspot.com)
  • Trend Micro unveils next-gen mobile security solution for Android (intomobile.com)
  • Lenovo IdeaPad K1 Tablet Price in India and Specs – Dual Core Android 3.1 Tablet (priceofmobiles.wordpress.com)
  • Review: JoikuSpot Premium (allaboutsymbian.com)
  • Kensington Announces Some Great New Accessories (geardiary.com)
  • [Mobile Security App Shootout, Part 14] ESET Mobile Security RC Still In Development, Offers Strong Features Nonetheless (androidpolice.com)
  • Have You Tried Mobile Blogging to Your Groups? (casasugar.com)
  • Telmetrics Launches Ground-Breaking Mobile Call Tracking Solution Exclusively for Mobile Local Search Publishers and App Developers (prweb.com)
  • Casa Beta: FluffyCo Eco Mobiles (casasugar.com)
  • Mobile Manners: Dropped call (zdnet.com)
  • Millennials and Mobile (outwardmediablog.wordpress.com)
  • Free Webinar: Mobile Marketing for the Hospitality Industry (mathieson.typepad.com)
  • Android malware outsmarts bank security, steals accounts – Security … (portadiferro2.blogspot.com)
  • The Evolution of Malware | SecurityWeek.Com (portadiferro2.blogspot.com)
  • New DroidDream Variant Has Ability To Fight Off Other Malware … (portadiferro2.blogspot.com)
  • Infographic: Two Decades of Malware (portadiferro2.blogspot.com)
  • Nuclear warheads could be next Stuxnet target: Check Point (portadiferro2.blogspot.com)
  • Clarke: Outdated cyber defense leaves US open to attack (portadiferro2.blogspot.com)
  • Advanced Malware, Targeted Attacks Compromise Enterprises via … (portadiferro2.blogspot.com)
  • Android malware steals bank account details (portadiferro2.blogspot.com)
  • Why Diginotar may turn out more important than Stuxnet Securelist (portadiferro2.blogspot.com)
  • Cyber criminals targeting mobile devices (premierlinedirect.co.uk)
  • Take It from The Stars: Stop Your Phone from Being Hacked (mylookout.com)
  • 5 Ways To Fight Mobile Malware (informationweek.com)
  • Mobile Security With a Data Mining Solution: Lookout Releases API for App Stores (readwriteweb.com)
  • Mobile malware criminal command-and-control activity (portadiferro2.blogspot.com)
  • Missile and submarine secrets ‘may have been stolen’ in cyber attack on … (portadiferro2.blogspot.com)
  • Mitsubishi Heavy: No defense information hacked (portadiferro2.blogspot.com)
  • Cyber-espionage hits defence companies (portadiferro2.blogspot.com)
  • Military Contractor Mitsubishi Heavy Hit By Hack Attack (portadiferro2.blogspot.com)
  • Spam relating to #DigiNotar certificates is detected (portadiferro2.blogspot.com)
  • ComodoHacker Declares Private Cyber-War (portadiferro2.blogspot.com)
  • A Post Mortem on the Iranian DigiNotar Attack (portadiferro2.blogspot.com)
  • Debacle deepens for hacked SSL certificates issuer (portadiferro2.blogspot.com)
  • Comodo hacker claims credit for DigiNotar attack (portadiferro2.blogspot.com)
