Wishing you and your loved ones a joyous holiday season and a peaceful and prosperous New Year.
Best wishes
Antonio
Tweet from Data Privacy Asia (@dataprivacyasia), December 15, 2016: “Missed the conference? Learn how to pay down your #cybersecurity debt @PrivacyProf, @AntonioIerano & @MagdaChelly” https://twitter.com/dataprivacyasia/status/809517859110080513
I hear a lot of people talking about “solution selling”; everyone keeps telling us they are moving in that direction, but do we really understand what “solution selling” means?
Solution selling does not mean we have to sell “a solution” in the sense of a complicated architecture or a set of interconnected boxes or anything of the sort.
Solution selling is a sales methodology. Rather than just promoting an existing product, the salesperson focuses on the customer’s pain(s) and addresses the issue with his or her offerings (products and services). The resolution of the pain is the “solution”.
In a “solution selling” approach, the key is to understand the customer’s pain points and to relate those pain points to your offering.
This should have been the standard selling approach in the IT market for the last 15 years (maybe more), and it is especially relevant for two categories of vendors:
1) The big ones that want to scale and need recurring deals from their customer base
2) The small ones with unique, high-quality offerings, typically innovative startups.
A vendor that does not fall into either of these two categories basically does not need a solution selling approach.
You do not need solution selling if you are a pure box seller.
The real difference between solution selling and box selling is the proactive approach required by the former.
While a box seller can go from door to door offering its products, putting minimal effort into convincing the customer and taking a short-term view, the solution seller first of all needs to build a relationship in order to know the customer; the outlook therefore cannot be immediate but medium-term, since creating a relationship requires time.
So the advantages of box selling compared to solution selling are:
• Immediate revenue
• Minimum effort
But what is the solution selling advantage, if any? In short, the main advantages of a solution selling approach are:
• Lower price pressure
• Recurring deals with the same customer
The lower price pressure is mainly due to the fact that, in a solution selling approach, targeting the pain point raises the perceived value of the products and services proposed, even in our consolidated technology market.
Of course, lower price pressure means higher margins, so it is understandable why so many IT/ICT vendors have historically moved to solution selling.
But better margins, per se, do not completely justify a solution selling approach. The most interesting aspect is the possibility of recurring deals, thanks to a better understanding of customer needs and stronger relationships.
In the end, solution selling allows healthier growth, better margins, and better use of the customer base.
But solution selling comes with a price: the key skill required is the ability to understand the customer.
If “solution selling” requires identifying customer pain(s), this means being able to understand the customer.
Understanding customers’ needs requires a different approach; it sounds silly, but the first step is to listen to the customer and actually understand him or her.
This requires being able to:
1) Understand the business issue
2) Relate it to the technical aspects of our offering.
The first point requires a business understanding that goes beyond the product itself. In order to solve a problem, you have to understand the problem. And to understand the problem you have to put yourself in your customer’s shoes.
The second point basically means having a technical approach that is not limited to the product specification but covers how the product “lives” inside the customer environment.
Both points 1 and 2 require, usually, the involvement of 2 common sales roles:
The Customer Account Manager and the Pre-sales Engineer
Both roles are key in the solution selling approach because they are the engines that understand the issue, translate it into a technical offering, and communicate the value to the customer. While the first is usually the holder of the relationship and the commercial interface, the second is the “translator” from business need to product and services offering.
Keeping the two roles separate is usually a good thing, since a pre-sales engineer should not be seen as a “salesperson”; this preserves his or her technical credibility.
Things become more complicated, in an Enterprise environment, when we add to the equation the role of the channel.
How can a vendor add this value to its channel? Basically, through two specific approaches:
Channel segmentation and channel education.
Since the approach to the customer through the channel is not direct, what is usually done is to provide the channel with shared resources that can fill whatever gaps it has in implementing a solution selling approach.
This is done basically through channel specialization (vertical, product, certification) and channel support via sales specialists and channel pre-sales engineers.
We already understood that the solution selling approach requires a different attitude when approaching the customer, but it also means the customer will behave differently with us. The most evident aspect of this change is the need for a Proof of Concept, or POC.
Basically, from the customer side, the point is:
“OK, I am buying value from you, which will solve my pain point. But I need to be sure, because I need this pain removed, so please demonstrate that:
1) You can actually solve my pain point
2) You will not generate more problems with the introduction of your offering”
This means, basically, that we have to prove that what we say is true, and usually this is done by example. This means:
1) References when available
2) Proof of Concept
Sometimes a proof of concept is just a demo, sometimes it is a test in a virtual environment, and sometimes (it has happened to me in the past) it is a test in a live, running production environment.
So we should be brave enough to accept the challenge and prove to our customer that we are trustworthy and can actually help. If we don’t, we risk losing our credibility and losing the customer, at least for value-based selling.
One of the other consequences related to the solution selling approach is the need for a different marketing approach.
While selling isolated boxes can keep the focus on the box itself even from a marketing perspective, a solution approach requires, more than anything else, building company credibility. In other terms, if you want to offer a solution for a pain point, the customer needs to trust you on two counts:
You are able to understand the pain,
and
You are able to solve the pain.
Those two aspects are not strictly product-related; therefore it is necessary to change the communication approach, moving toward a more “institutional” one.
This communication needs to target 2 different audiences:
1) Potential customers
2) Partners and resellers
This is why it is common to have two different but integrated communication plans.
If you are a box seller, no doubt, you have to start preparing the ground to move from box to solution.
It is interesting to note that the solution selling approach is not mutually exclusive with the box selling one; they are just two aspects of the selling activity of an IT/ICT vendor.
Focusing on verticals will require, sooner or later, changing the generalist approach used as a box seller into a more targeted one, where you rely on qualified salespeople (with a deep understanding of specific verticals) and introduce skilled pre-sales figures, which will remain missing in action as long as you look only for inexperienced, young, and cheap rookies.
You will still have, at the same time, a lot to do in terms of your marketing approach and, I am afraid, in terms of people management.
But the good news is we have a lot of space for improvement.
Happy selling!
One of the problems nowadays, when we talk about firewalls, is understanding what a firewall actually is and what the acronyms used to define the different types of firewalls mean.
The common definition today recognizes 3 main types of firewalls:
• Firewalls
• UTM
• NGFW
But what are the differences (if any) between those things?
Let’s start with the very basics: what a firewall is.
A firewall is a system used to maintain the security of a private network. Firewalls block unauthorized access to or from private networks and are often employed to prevent unauthorized Web users or illicit software from gaining access to private networks connected to the Internet. A firewall may be implemented using hardware, software, or a combination of both.
A firewall is recognized as the first line of defense in securing sensitive information. For better safety, the data can be encrypted.
Firewalls generally use two or more of the following methods:
• Packet Filtering: Firewalls filter packets that attempt to enter or leave a network and either accept or reject them depending on the predefined set of filter rules.
• Application Gateway: The application gateway technique employs security methods applied to certain applications such as Telnet and File Transfer Protocol servers.
• Circuit-Level Gateway: A circuit-level gateway applies these methods once a connection, such as a Transmission Control Protocol (TCP) connection, is established and packets start to move.
• Proxy Servers: Proxy servers can mask real network addresses and intercept every message that enters or leaves a network.
• Stateful Inspection or Dynamic Packet Filtering: This method compares not just the header information, but also a packet’s most important inbound and outbound data parts. These are then compared to a trusted information database for characteristic matches. This determines whether the information is authorized to cross the firewall into the network.
The limit of the firewall itself is that it works only at the protocol level (IP/TCP/UDP), with no knowledge of the higher-level risks that can cross the network.
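To make that limitation concrete, here is a minimal, purely illustrative Python sketch of how a stateless packet filter decides: it sees only addresses, protocol, and ports (the rules and addresses below are invented for the example), which is exactly why it knows nothing about users or applications.

```python
# Minimal illustration of stateless packet filtering (hypothetical rules, not a real firewall).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str   # "tcp" or "udp"
    dst_port: int

@dataclass
class Rule:
    action: str                    # "accept" or "drop"
    protocol: Optional[str] = None
    dst_port: Optional[int] = None
    src_ip: Optional[str] = None

def evaluate(packet: Packet, rules: list[Rule], default: str = "drop") -> str:
    """Return the action of the first matching rule; fall back to the default policy."""
    for rule in rules:
        if rule.protocol and rule.protocol != packet.protocol:
            continue
        if rule.dst_port and rule.dst_port != packet.dst_port:
            continue
        if rule.src_ip and rule.src_ip != packet.src_ip:
            continue
        return rule.action
    return default

rules = [
    Rule(action="accept", protocol="tcp", dst_port=443),   # allow HTTPS
    Rule(action="drop", src_ip="203.0.113.7"),             # block a known bad host
]
print(evaluate(Packet("198.51.100.5", "10.0.0.1", "tcp", 443), rules))  # accept
print(evaluate(Packet("198.51.100.5", "10.0.0.1", "tcp", 23), rules))   # drop (default policy)
```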
From antivirus to content filtering, there are hundreds of different technologies that can complement the firewall’s work in order to protect our resources.
To address this more complex security environment, firewalls evolved into something new that covers different aspects beyond simple protocol inspection. These devices use different technologies to address different aspects of security in a single box: the so-called UTM (Unified Threat Management).
Unified threat management (UTM) refers to a specific kind of IT product that combines several key elements of network security to offer a comprehensive security package to buyers.
A unified threat management solution involves combining the utility of a firewall with other guards against unauthorized network traffic along with various filters and network maintenance tools, such as anti-virus programs.
The emergence of unified threat management is a relatively new phenomenon, because the various aspects that make up these products used to be sold separately. However, by selecting a UTM solution, businesses and organization can deal with just one vendor, which may be more efficient. Unified threat management solutions may also promote easier installation and updates for security systems, although others contend that a single point of access and security can be a liability in some cases.
UTMs are gaining momentum but still lack an understanding of context and users, and are therefore not the best fit for the new environments. To close this gap, security vendors moved up the stack, from protocols to applications, where user behavior and context are key.
This drove the move from UTM to the so-called Next Generation Firewall, or NGFW.
A next-generation firewall (NGFW) is a hardware- or software-based network security system that is able to detect and block sophisticated attacks by enforcing security policies at the application level, as well as at the port and protocol level.
Next-generation firewalls integrate three key assets: enterprise firewall capabilities, an intrusion prevention system (IPS) and application control. Like the introduction of stateful inspection in first-generation firewalls, NGFWs bring additional context to the firewall’s decision-making process by giving it the ability to understand the details of the Web application traffic passing through it and to take action to block traffic that might exploit vulnerabilities.
Next-generation firewalls combine the capabilities of traditional firewalls — including packet filtering, network address translation (NAT), URL blocking and virtual private networks (VPNs) — with Quality of Service (QoS) functionality and features not traditionally found in firewall products.
These include intrusion prevention, SSL and SSH inspection, deep-packet inspection and reputation-based malware detection as well as application awareness. The application-specific capabilities are meant to thwart the growing number of application attacks taking place on layers 4-7 of the OSI network stack.
The simple definition of application control is the ability to detect an application based on the application’s content rather than on the traditional layer 4 protocol. Since many application providers are moving to a Web-based delivery model, the ability to detect an application based on its content is important, while working only at the protocol level is almost worthless.
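As a rough illustration of the difference, the hypothetical Python sketch below contrasts a port-based guess with a content-based one; the host name and payload are invented and are not a real detection signature.

```python
# Hypothetical illustration: port-based vs content-based application identification.
def identify_by_port(dst_port: int) -> str:
    """Classic layer-4 guess: everything on 443 just looks like 'https'."""
    return {80: "http", 443: "https"}.get(dst_port, "unknown")

def identify_by_content(payload: bytes) -> str:
    """Content-based guess: look at what actually travels inside the connection."""
    text = payload[:512].decode("utf-8", errors="ignore")
    if "Host: teams.example.com" in text:      # hypothetical SaaS host name
        return "collaboration-app"
    if text.startswith(("GET ", "POST ")):
        return "generic-web"
    return "unknown"

payload = b"POST /api/chat HTTP/1.1\r\nHost: teams.example.com\r\n\r\n"
print(identify_by_port(443))         # 'https': tells us nothing about the application
print(identify_by_content(payload))  # 'collaboration-app'
```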
Yet in the market it is still not easy to understand what a UTM is and what an NGFW is.
Next-Generation Firewalls were defined by Gartner as firewalls with Application Control, User Awareness and Intrusion Detection. So basically an NGFW is a firewall that moves from creating rules based on IP and port to creating rules based on user, application, and other parameters.
The difference is, basically, the shift from the old TCP/IP protocol model to a new User/Application/Context one.
On the other hand, UTMs are a mix of technologies that address different security aspects, from antivirus to content filtering, from web security to email security, all on top of a firewall. Some of those technologies may need to be configured to recognize users, while they seldom deal with applications.
In the market, the problem is that nowadays the traditional firewall no longer exists, even in the personal/home/SOHO space: most products are UTM-based.
Most firewall vendors have moved from plain firewalls to either a UTM or an NGFW offering; in most cases NGFWs also offer UTM functions, while most UTMs have added NGFW application control functions, creating de facto a new generation of products and changing the landscape with the introduction of the Next Generation UTM.
UTM vendors and NGFW vendors keep fighting over which is the best solution in modern environments, but this is a marketing fight more than a technically sound discussion.
The real thing is that UTM and NGFW are becoming more and more the same thing.
Why have security devices become so comprehensive, trying to unify so many services? Management is the last piece of the puzzle. In two separate studies, one by Gartner and one by Verizon’s data risk analysis team, it was shown that an overwhelmingly large percentage of security breaches were caused by simple configuration errors. Gartner says “More than 95% of firewall breaches are caused by firewall misconfigurations, not firewall flaws.” Verizon’s estimate is even higher, at 96%. Both agree that the vast majority of our customers’ security problems are caused by implementing security products that are too difficult to use. The answer? Put it all in one place and make it easy to manage. The best security in the world is useless unless you can manage it effectively.
I am a bit worried, because my impression is that in Italy, despite one of the strictest privacy laws in Europe and the new requirements introduced, or about to be introduced, by the GDPR, the concept of privacy is highly underestimated.
The problem, of course, lies in the historical Italian underestimation of the impact of IT systems on production, decision-making, and management processes.
In short, nobody takes an interest, nobody understands, nobody evaluates. As a consequence, wrong behaviors are not corrected and, at the same time, new opportunities are not exploited, leaving us behind on new technologies, with all due respect to those (from Olivetti to Faggin, though we could also mention Marconi and Meucci) who had made Italy a platform for innovation.
Oh well.
Polemics aside, let’s try to understand, with an example so simple that even an HR office could grasp it, what managing privacy and data protection means.
Imagine that your HR office receives a CV from a potential candidate. Not such a strange thing in times when job hunting is essential (I myself have sent out hundreds recently).
Imagine also that the CV arrives by email (quite common) and that, precisely because there are open positions, it gets circulated among a few potentially interested people. For example, the hiring manager.
At this point I am not interested in how the story of the human being behind that piece of paper, with his or her needs, aspirations, and potential, will end. What interests me is precisely that piece of paper, virtual as it is.
As you can imagine, that piece of paper contains personal data, since it can be traced back to a natural person.
Oops: does that mean I have to process it in accordance with the law? Does that mean the GDPR (whatever that is) gets involved?
I am afraid so.
So, in theory, assuming you are in any way interested in being aligned with the law, you should process this data accordingly.
I do not want to write a detailed dissertation on the GDPR here; I will limit myself to a few trivial considerations, just to help you avoid a hefty fine.
The CV in question will probably end up in:
Now, since that piece of paper (virtual or not) contains personal data, and perhaps sensitive data (say, your last salary, your IBAN, your lover’s address…), you who receive it should have a management process in place which takes into account that this data must:
Whether you believe it or not, this requires defined processes covering the “life” of that thing which, I know, you are now starting to hate.
In short, you should know things like (closely related to one another):
How long do I keep this thing in my systems?
How do I store this data?
How do I delete it?
It sounds easy, but do you really know what happens to the CVs you receive?
Let me translate: do you have a standard rule defining how long you may keep this data? Months? Years? Decades? Forever? Why the @#?§ should I care?
OK, I know the last one is your current policy, but I am afraid it is not the answer that best fits our legislation.
How long I keep that object and the related data in house is important because:
Now, point one is already a sore point. It means you should know where this stuff lives in your systems. And it does not matter whether it is on paper or electronic…
The point is also sore because it forces you to use consistent techniques for protecting, storing, recovering, and accessing the data.
In short, let me see if I can explain it: if you save it in a database or put it in a folder, you must somehow guarantee that access is not granted to just anyone, even inside the company.
And if it has started circulating via email, I know it can be hard to stop it from going everywhere; so maybe it would be wise if everyone in the company knew these things, not just the poor soul on duty who has to deal with this boring privacy stuff.
In short, tell whoever received it that it must be handled appropriately, perhaps deleting it when it is no longer needed; otherwise, how do you guarantee adequate protection and lifecycle management?
Then, of course, IT should also guarantee protection against external intrusions:
some call it cyber security,
others call it information security,
you call it “that technical stuff I don’t understand at all, but I have an antivirus I haven’t updated in six months because it slows down my computer”.
In theory you should also have adequate systems for saving and recovering data. Something called backup and recovery; maybe you heard about it the last time you lost all your data…
All this because if you don’t, and you are unlucky enough to catch a ransomware, or someone breaks into your systems and you end up in the newspapers because they published the photo of the lover your candidate had put in the CV, the person who sent you that CV might start asking questions and hold you accountable for your actions. And you know the bad part? According to the law, it is not all IT’s fault (notoriously the source of all evil)…
You hate this CV more and more, don’t you?
And consider that things are even more complicated, because yes, someone has to take care of the backups and test the restores from time to time.
Stuff that the good Monguzzi never stops reminding us about, and that we routinely ignore.
But let me add one more little piece. If you delete the data, it must really be deleted. That means deleting all the copies present in the company:
Did you know that? No?
Well, now you know!
I know I am going against the grain by writing these things, and that you have far more important things to think about. But if I wanted to, I could go on and talk to your marketing, your sales office, your purchasing department, whoever manages your website, and probably even your IT manager, who, when you mention the GDPR, just shrugs you off!
The point is that these things seem complicated, but they really are not. It would be enough to understand what it means to integrate data into business processes, and to design those processes taking into account legal requirements, business needs, and current technology.
Of course, it also means you cannot treat privacy as a nuisance that does not concern you, exactly as you should not do with IT.
Think about it, and if you end up avoiding a fine maybe you will even thank me, once you have finished the string of insults I have earned for telling you these things.
Ciao
Pretty Good Privacy, or PGP, is a popular program used to encrypt and decrypt email over the Internet, as well as to authenticate messages with digital signatures and to encrypt stored files.
Previously available as freeware and now only available as a low-cost commercial version, PGP was once the most widely used privacy-ensuring program by individuals and is also used by many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de facto standard for email security.
Pretty Good Privacy uses a variation of the public key system. In this system, each user has an encryption key that is publicly known and a private key that is known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the short key are sent to the receiver who first uses the receiver’s private key to decrypt the short key and then uses that key to decrypt the message.
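A minimal sketch of that hybrid flow, written with the third-party Python “cryptography” package; the message, the variable names, and the use of Fernet as the bulk cipher are illustrative assumptions, not PGP’s actual formats or algorithms.

```python
# Sketch of PGP-style hybrid encryption: a fast symmetric key protects the message,
# and the recipient's public key protects only that short symmetric key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Recipient key pair (in PGP these would live in the key ring).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1) Encrypt the bulk message with a random symmetric "session" key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet me at noon")

# 2) Encrypt only the short session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver side: unwrap the session key with the private key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"meet me at noon"
```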
PGP comes in two public key versions — Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a mathematical summary) from the user’s name and other signature information. This hash code is then encrypted with the sender’s private key. The receiver uses the sender’s public key to decrypt the hash code. If it matches the hash code sent as the digital signature for the message, the receiver is sure that the message has arrived securely from the stated sender. PGP’s RSA version uses the MD5 algorithm to generate the hash code. PGP’s Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.
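A compact sketch of the same hash-then-sign idea, again with the Python “cryptography” package; it uses SHA-256 with RSA-PSS rather than the MD5 or SHA-1 mentioned above, purely to keep the example modern.

```python
# Sketch of sign/verify: the sender signs a digest with the private key,
# the receiver verifies it with the matching public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"the quarterly report is attached"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")          # message really came from the key holder, unmodified
except InvalidSignature:
    print("signature invalid")
```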
To use Pretty Good Privacy, download or purchase it and install it on your computer system. It typically contains a user interface that works with your customary email program. You may also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key.
PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code, and soon stopped offering a freeware version of the technology. The vendor currently offers PGP technology in a variety of its encryption products, such as Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption Desktop Storage. Symantec also makes the Symantec Encryption Desktop source code available for peer review.
Though Symantec ended PGP freeware, other non-proprietary versions of the technology are available. OpenPGP is an open standard version of PGP supported by the Internet Engineering Task Force (IETF). OpenPGP is used by several software vendors, including Coviant Software, which offers a free tool for OpenPGP encryption, and HushMail, which offers a Web-based encrypted email service powered by OpenPGP. In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), OpenPGP-compliant encryption software.
Pretty Good Privacy can be used to authenticate digital certificates and to encrypt/decrypt texts, emails, files, directories and whole disk partitions. Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for encrypting files shared across a network and Symantec Endpoint Encryption for full disk encryption on desktops, mobile devices and removable storage. When PGP technology is used for files and drives instead of messages, the Symantec products allow users to decrypt and re-encrypt data via a single sign-on.
Originally, the U.S. government restricted the exportation of PGP technology and even launched a criminal investigation against Zimmermann for putting the technology in the public domain (the investigation was later dropped). Network Associates Inc. (NAI) acquired Zimmermann’s company, PGP Inc., in 1997 and was able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec in 2010).
Today, PGP-encrypted email can be exchanged with users outside the U.S. if you have the correct versions of PGP at both ends.
There are several versions of PGP in use. Add-ons can be purchased that allow backwards compatibility for newer RSA versions with older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each other since they use different algorithms. There are also a number of technology companies that have released tools or services supporting PGP. Google this year introduced an OpenPGP email encryption plug-in for Chrome, while Yahoo also began offering PGP encryption for its email service.
Asymmetric algorithms (public key algorithms) use different keys for encryption and decryption, and the decryption key cannot (practically) be derived from the encryption key. Asymmetric algorithms are important because they can be used for transmitting encryption keys or other data securely even when the parties have no opportunity to agree on a secret key in private.
Types of asymmetric algorithms (public key algorithms):
• RSA
• Diffie-Hellman
• Digital Signature Algorithm
• ElGamal
• ECDSA
• XTR
Examples of asymmetric algorithms:
RSA
Rivest-Shamir-Adleman is the most commonly used asymmetric algorithm (public key algorithm). It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.
RSA computation occurs with integers modulo n = p * q, for two large secret primes p and q. To encrypt a message m, it is exponentiated with a small public exponent e, giving the ciphertext c = m^e (mod n). For decryption, the recipient computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (we require that e is selected suitably for it to exist) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e, d (where p and q can be omitted); the public key contains only n and e. The problem for the attacker is that computing the inverse d of e is assumed to be no easier than factoring n.
The key size should be greater than 1024 bits for a reasonable level of security; keys of, say, 2048 bits should provide security for decades. (RC5 and RC6, which are sometimes mentioned in this context, are unrelated symmetric block ciphers designed by Ron Rivest; RC6 was a finalist algorithm for AES.)
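A toy worked example of the formulas above, in plain Python with deliberately tiny primes (real keys use primes of a thousand bits or more, plus padding such as OAEP):

```python
# Toy RSA with tiny primes, following the formulas above; for intuition only.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # small public exponent, coprime with phi
d = pow(e, -1, phi)            # 2753, the private exponent e^(-1) mod phi (Python 3.8+)

m = 65                         # message, encoded as an integer smaller than n
c = pow(m, e, n)               # encryption: c = m^e mod n  -> 2790
assert pow(c, d, n) == m       # decryption: c^d = m^(e*d) = m mod n
```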
Diffie-Hellman
Diffie-Hellman, invented in 1976, was the first published asymmetric algorithm; it uses discrete logarithms in a finite field and allows two users to establish a secret key over an insecure medium without any prior shared secrets.
Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographic protocols, two parties wish to begin communicating securely. However, let’s assume they do not initially possess any common secret and thus cannot use secret-key cryptosystems. The Diffie-Hellman key exchange protocol remedies this situation by allowing the construction of a common secret key over an insecure communication channel. It is based on a problem related to discrete logarithms, namely the Diffie-Hellman problem. This problem is considered hard, and in some instances it is as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-Hellman is not implemented on hardware.
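A toy Diffie-Hellman exchange in Python, with a deliberately tiny prime just to show the mechanics (real deployments use groups of 2048 bits or more, or elliptic curves):

```python
# Toy Diffie-Hellman key exchange; the numbers are far too small for real security.
import secrets

p = 23          # public prime modulus (toy size)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, a, p)   # Alice sends g^a mod p over the insecure channel
B = pow(g, b, p)   # Bob sends g^b mod p

# Each side combines the other's public value with its own secret exponent.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob    # identical shared secret, never transmitted
```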
Digital Signature Algorithm
Digital Signature Algorithm (DSA) is a United States Federal Government standard (FIPS) for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in its Digital Signature Standard (DSS), specified in FIPS 186 [1] and adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. The DSA signature scheme is similar to the ElGamal signature scheme. It is fairly efficient, though not as efficient as RSA for signature verification. The standard defines the DSS to use the SHA-1 hash function exclusively to compute message digests.
The main problem with DSA is the fixed subgroup size (the order of the generator element), which limits the security to around only 80 bits. Hardware attacks can be menacing to some implementations of DSS. However, it is widely used and accepted as a good algorithm.
ElGamal
ElGamal is a public key cipher: an asymmetric encryption algorithm for public-key cryptography based on the Diffie-Hellman key agreement. The ElGamal signature scheme is the predecessor of DSA.
ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) which operates on elliptic curve groups. As with Elliptic Curve Cryptography in general, the bit size of the public key believed to be needed for ECDSA is about twice the size of the security level, in bits.
XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA), small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall comparable with ECC for current security settings).
Symmetric and asymmetric algorithms
Symmetric algorithms encrypt and decrypt with the same key. The main advantages of symmetric algorithms are their security and high speed. Asymmetric algorithms encrypt and decrypt with different keys: data is encrypted with a public key and decrypted with a private key. Asymmetric (public-key) algorithms need at least a 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm. Asymmetric algorithms are also much slower, so it is impractical to use them to encrypt large amounts of data. Generally, symmetric algorithms are much faster to execute on a computer than asymmetric ones. In practice they are often used together: a public-key algorithm encrypts a randomly generated encryption key, and that random key encrypts the actual message with a symmetric algorithm. This is sometimes called hybrid encryption.
Pretty Good Privacy or PGP is a popular program used to encrypt and decrypt email over the Internet, as well as authenticate messages with digital signatures and encrypted stored files.
Originally distributed as freeware and later available only as a low-cost commercial product, PGP was once the most widely used privacy program among individuals and is also used by many corporations. It was developed by Philip R. Zimmermann in 1991 and has become a de facto standard for email security.
Pretty Good Privacy uses a variation of the public key system. In this system, each user has an encryption key that is publicly known and a private key that is known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the short key are sent to the receiver who first uses the receiver’s private key to decrypt the short key and then uses that key to decrypt the message.
PGP comes in two public key versions — Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
When sending digital signatures, PGP uses an efficient algorithm that generates a hash (a mathematical summary) of the message and other signature information. This hash code is then encrypted with the sender’s private key. The receiver uses the sender’s public key to decrypt the hash code; if it matches a hash freshly computed over the received message, the receiver can be confident that the message arrived unaltered from the stated sender. PGP’s RSA version uses the MD5 algorithm to generate the hash code; PGP’s Diffie-Hellman version uses the SHA-1 algorithm.
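As a rough illustration of this sign-and-verify flow (not PGP’s actual implementation), the toy sketch below hashes a message with SHA-1, encrypts the digest with a textbook RSA private key, and lets the receiver recover and compare it using the public key. The tiny primes and the raw, unpadded RSA operation are illustrative assumptions only; real PGP uses large keys and proper signature padding.

```python
import hashlib

# Toy textbook-RSA key pair (illustrative only; real keys are 2048+ bits).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

message = b"meet me at noon"

# Sender: hash the message, then encrypt the digest with the private key.
digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n  # reduced mod n only because n is tiny
signature = pow(digest, d, n)

# Receiver: decrypt the signature with the sender's public key and compare.
recovered = pow(signature, e, n)
expected = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
print("signature valid:", recovered == expected)
```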
To use Pretty Good Privacy, download or purchase it and install it on your computer system. It typically contains a user interface that works with your customary email program. You may also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key.
PGP freeware is available for older versions of Windows, Mac, DOS, Unix and other operating systems. In 2010, Symantec Corp. acquired PGP Corp., which held the rights to the PGP code, and soon stopped offering a freeware version of the technology. The vendor currently offers PGP technology in a variety of its encryption products, such as Symantec Encryption Desktop, Symantec Desktop Email Encryption and Symantec Encryption Desktop Storage. Symantec also makes the Symantec Encryption Desktop source code available for peer review.
Though Symantec ended PGP freeware, there are other non-proprietary versions of the technology available. OpenPGP is an open standard based on PGP that is maintained by the Internet Engineering Task Force (IETF). OpenPGP is used by several software vendors, such as Coviant Software, which offers a free tool for OpenPGP encryption, and HushMail, which offers a web-based encrypted email service powered by OpenPGP. In addition, the Free Software Foundation developed GNU Privacy Guard (GPG), an OpenPGP-compliant encryption program.
Pretty Good Privacy can be used to authenticate digital certificates and encrypt/decrypt texts, emails, files, directories and whole disk partitions. Symantec, for example, offers PGP-based products such as Symantec File Share Encryption for encrypting files shared across a network and Symantec Endpoint Encryption for full disk encryption on desktops, mobile devices and removable storage. When PGP technology is used for files and drives instead of messages, the Symantec products allow users to decrypt and re-encrypt data via a single sign-on.
Originally, the U.S. government restricted the exportation of PGP technology and even launched a criminal investigation against Zimmermann for putting the technology in the public domain (the investigation was later dropped). Network Associates Inc. (NAI) acquired Zimmermann’s company, PGP Inc., in 1997 and was able to legally publish the source code (NAI later sold the PGP assets and IP to ex-PGP developers that joined together to form PGP Corp. in 2002, which was acquired by Symantec in 2010).
Today, PGP-encrypted email can be exchanged with users outside the U.S. if you have compatible versions of PGP at both ends.
There are several versions of PGP in use. Add-ons can be purchased that allow newer RSA versions to interoperate with older ones. However, the Diffie-Hellman and RSA versions of PGP do not work with each other, since they use different algorithms. A number of technology companies have also released tools or services supporting PGP: Google introduced an OpenPGP-based email encryption plug-in for Chrome, while Yahoo began offering PGP encryption for its email service.
Asymmetric algorithms (public key algorithms) use different keys for encryption and decryption, and the decryption key cannot (practically) be derived from the encryption key. Asymmetric algorithms are important because they can be used for transmitting encryption keys or other data securely even when the parties have no opportunity to agree on a secret key in private.
Types of Asymmetric algorithms
The main asymmetric (public-key) algorithms include:
• RSA
• Diffie-Hellman
• Digital Signature Algorithm
• ElGamal
• ECDSA
• XTR
Asymmetric algorithms examples:
RSA Asymmetric algorithm
Rivest-Shamir-Adleman is the most commonly used asymmetric algorithm (public key algorithm). It can be used both for encryption and for digital signatures. The security of RSA is generally considered equivalent to factoring, although this has not been proved.
RSA computation occurs with integers modulo n = p * q, for two large secret primes p and q. To encrypt a message m, it is exponentiated with a small public exponent e, giving the ciphertext c = m^e (mod n). For decryption, the recipient computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (e must be selected so that this inverse exists) and obtains c^d = m^(e*d) = m (mod n). The private key consists of n, p, q, e, d (where p and q can be omitted); the public key contains only n and e. The problem for the attacker is that computing d from e is assumed to be no easier than factoring n.
The key size should be at least 2048 bits for a reasonable level of security; 1024-bit keys are no longer considered adequate, and keys of 3072 bits or more should allow security for decades. (Despite the similar naming, RC5 and RC6 are unrelated symmetric block ciphers, not variants of RSA; RC6 was a finalist in the AES competition.)
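As a minimal sketch of the arithmetic just described, the snippet below runs textbook RSA with deliberately tiny primes so the numbers stay readable; any real key uses primes hundreds of digits long and proper padding.

```python
# Textbook RSA with toy parameters, following the formulas above (Python 3.8+).
p, q = 61, 53
n = p * q                     # modulus n = p * q = 3233
phi = (p - 1) * (q - 1)       # (p - 1) * (q - 1) = 3120
e = 17                        # small public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent d = e^-1 mod phi

m = 65                        # message, encoded as an integer smaller than n
c = pow(m, e, n)              # encryption: c = m^e mod n
m_back = pow(c, d, n)         # decryption: c^d = m^(e*d) = m mod n

print((n, e), d)              # public key (n, e) and private exponent d
print(c, m_back)              # ciphertext and recovered plaintext (65)
```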
Diffie-Hellman
Diffie-Hellman, published in 1976, was the first asymmetric (public-key) algorithm. It is based on discrete logarithms in a finite field and allows two users to agree on a secret key over an insecure medium without any prior shared secret.
Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographic protocols, two parties wish to begin communicating but do not initially possess any common secret, so they cannot use a secret-key cryptosystem. The Diffie-Hellman protocol remedies this situation by allowing the construction of a common secret key over an insecure communication channel. Its security rests on a problem related to discrete logarithms, the Diffie-Hellman problem, which is considered hard and is in some instances as hard as the discrete logarithm problem itself.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-Hellman is not implemented on hardware.
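A minimal sketch of the exchange itself, using a tiny illustrative prime and generator; real deployments use standardized groups with 2048-bit or larger primes.

```python
import secrets

# Toy public parameters: prime modulus p and generator g (illustrative only).
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1      # Alice's secret
b = secrets.randbelow(p - 2) + 1      # Bob's secret
A = pow(g, a, p)                      # Alice sends A to Bob
B = pow(g, b, p)                      # Bob sends B to Alice

# Both sides derive the same shared secret from the other's public value.
shared_alice = pow(B, a, p)           # (g^b)^a mod p
shared_bob = pow(A, b, p)             # (g^a)^b mod p
print(shared_alice == shared_bob)     # True
```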
Digital Signature Algorithm
Digital Signature Algorithm (DSA) is a United States federal government standard (FIPS) for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in its Digital Signature Standard (DSS), specified in FIPS 186 [1] and adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. The DSA signature scheme is similar to the ElGamal signature algorithm. It is fairly efficient, though not as efficient as RSA for signature verification. The original standard required the SHA-1 hash function exclusively for computing message digests.
The main problem with DSA as originally specified is the fixed subgroup size (the order of the generator element), which limits the security to around 80 bits. Hardware (side-channel) attacks can also threaten some implementations. Nevertheless, DSA is widely used and accepted as a good algorithm.
ElGamal
ElGamal is a public-key cipher: an asymmetric encryption algorithm based on the Diffie-Hellman key agreement. The ElGamal signature scheme is the predecessor of DSA.
ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) that operates on elliptic curve groups. As with elliptic curve cryptography in general, the public-key size believed to be needed for ECDSA is about twice the security level in bits; a 256-bit curve, for example, gives roughly 128-bit security.
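As an illustration of these compact key sizes, the sketch below signs and verifies a message with ECDSA on the 256-bit P-256 curve (roughly 128-bit security). It assumes the third-party pyca/cryptography package is installed; it is not part of the Python standard library.

```python
# pip install cryptography   (assumed third-party dependency)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A 256-bit curve gives roughly 128-bit security with very small keys.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"message to be signed"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```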
XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA), small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall comparable with ECC for current security settings).
Symmetric and asymmetric algorithms
Symmetric algorithms encrypt and decrypt with the same key; their main advantages are security and high speed. Asymmetric algorithms encrypt and decrypt with different keys: data is encrypted with a public key and decrypted with the corresponding private key. Asymmetric (public-key) algorithms need a roughly 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm, and they are much slower, so it is impractical to use them to encrypt large amounts of data. In practice the two are often used together: a public-key algorithm encrypts a randomly generated session key, and that session key encrypts the actual message with a symmetric algorithm. This is sometimes called hybrid encryption.
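A minimal hybrid-encryption sketch of the pattern just described: a random AES session key encrypts the bulk data, and RSA-OAEP encrypts only that short session key. It assumes the third-party pyca/cryptography package; key distribution, authentication of the public key and error handling are omitted.

```python
# pip install cryptography   (assumed third-party dependency)
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the bulk data with a random symmetric session key...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"a long message ...", None)

# ...and encrypt only the short session key with the recipient's public key.
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
print(plaintext)
```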