15 Oct 2011

What cloud computing really means
The next big trend sounds nebulous, but it's not so fuzzy when you view the value proposition from the perspective of IT professionals


Cloud computing is all the rage. "It's become the phrase du jour," says Gartner senior analyst Ben Pring, echoing many of his peers. The problem is that (as with Web 2.0) everyone seems to have a different definition.
As a metaphor for the Internet, "the cloud" is a familiar cliché, but when combined with "computing," the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing anything you consume outside the firewall is "in the cloud," including conventional outsourcing.
Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.
Cloud computing is at an early stage, with a motley crew of providers large and small delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Yes, utility-style infrastructure providers are part of the mix, but so are SaaS (software as a service) providers such as Salesforce.com. Today, for the most part, IT must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.
InfoWorld talked to dozens of vendors, analysts, and IT customers to tease out the various components of cloud computing. Based on those discussions, here's a rough breakdown of what cloud computing is all about:
1. SaaS: This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting. Salesforce.com is by far the best-known example among enterprise applications, but SaaS is also common for HR apps and has even worked its way up the food chain to ERP, with players such as Workday. And who could have predicted the sudden rise of SaaS "desktop" applications, such as Google Apps and Zoho Office?
2. Utility computing: The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera's AppLogic and Cohesive Flexible Technologies' Elastic Server on Demand. Liquid Computing's LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.
3. Web services in the cloud: Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services -- such as Strike Iron and Xignite -- to the full range of APIs offered by Google Maps, ADP payroll processing, the U.S. Postal Service, Bloomberg, and even conventional credit card processing services. (For a feel of what consuming such a service looks like to a developer, see the short sketch after this list.)
4. Platform as a service: Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider's infrastructure and are delivered to your users via the Internet from the provider's servers. Like Legos, these services are constrained by the vendor's design and capabilities, so you don't get complete freedom, but you do get predictability and pre-integration. Prime examples include Salesforce.com's Force.com, Coghead, and the new Google App Engine. For extremely lightweight development, cloud-based mashup platforms abound, such as Yahoo Pipes or Dapper.net.
5. MSP (managed service providers): One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.
6. Service commerce platforms: A hybrid of SaaS and MSP, this cloud computing service offers a service hub that users interact with. They're most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform that then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.
7. Internet integration: The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central -- which wanted to be a universal "bus in the cloud" to connect SaaS providers and provide integrated solutions to customers -- flamed out in 2005.
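To make item 3 above concrete: consuming one of these cloud web services usually boils down to an HTTP request to the provider's API and a parsed response. The sketch below assumes the third-party requests library; the endpoint, parameters and response fields are hypothetical stand-ins, not any particular vendor's API.

    import requests

    # Hypothetical geocoding-style web service; URL and fields are placeholders.
    resp = requests.get(
        "https://api.example.com/geocode",
        params={"address": "1600 Amphitheatre Pkwy, Mountain View, CA"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    print(data.get("latitude"), data.get("longitude"))

The point is that the heavy lifting -- the data, the uptime, the scaling -- stays with the provider; the developer's side of the contract is a few lines of glue.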
Today, with such cloud-based interconnection seldom in evidence, cloud computing might be more accurately described as "sky computing," with many isolated clouds of services which IT customers must plug into individually. On the other hand, as virtualization and SOA permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should eventually make every enterprise a node in the cloud. It's a long-running trend with a far-out horizon. But among big metatrends, cloud computing is the hardest one to argue with in the long term.

6 Oct 2011

Meet Aakash, India’s Rs 2,250 tablet




DataWind has launched an Android Froyo-based tablet, the Aakash, which will be priced at Rs 2,250 under a government scheme that will distribute 100,000 units in a field trial. The price will come down to Rs 1,750 once the government places an order for 10 million units under a scheme to deliver the tablet to post-secondary students across the country. The Aakash features a 7.0-inch resistive display, a 366MHz processor, 256MB of RAM, two USB ports, Wi-Fi and three hours of battery life. For mass production of the tablet, DataWind has set up a manufacturing facility in Hyderabad that can turn out 700 units daily. For the general audience, DataWind will launch the UbiSlate for Rs 2,999 by November. DataWind is currently talking to carriers to bundle a Rs 99 data plan offering 2GB of monthly access for 12 months. Our hands-on and first impressions of the Aakash will be up soon.

Will Microsoft launch Windows Phone 7.5 in India on October 12?



A teaser ‘block your date’ invite from Microsoft has landed in our inbox telling us to block our dates on October 12. The invite says, “Imagine if everything important to you was right there – friends, family and work.” The artwork is unmistakably inspired by Microsoft’s Metro UI, complete with the arrow mark. So could it mean Microsoft is finally ready to launch Windows Phone 7.5 in India? Mind you, it never officially launched Windows Phone 7 here, as many features, including the Marketplace, were not available. We are quite confident that the launch will happen, as Samsung has already ‘launched’ the Omnia W (though it will be available in stores only in November), just to get dibs on the ‘India’s first Windows Phone 7.5 smartphone’ tag. We will be reporting live from the press conference, so stay tuned.

Samsung posts Nexus Prime and Android 4.0 teaser clip [video]


Samsung just cranked our anticipation meters up to 11. The phone maker on Wednesday posted a teaser video for the Nexus Prime, which is expected to be unveiled during Samsung’s Unpacked event in San Diego on Tuesday, October 11th, alongside Android 4.0 (Ice Cream Sandwich). The video does not reveal much about the phone or Ice Cream Sandwich, but we do see that the Nexus Prime appears to be incredibly thin and that it will sport a curved glass display similar to the one on the Nexus S. Additionally, there are three small metal dots on the side, which suggests it may support one or more dock accessories. Ice Cream Sandwich is expected to run on both tablets and smartphones, merging Honeycomb and Gingerbread features into a single Android build. BGR will be reporting live from the event, where we will hear all of the details on the new operating system, the Nexus Prime hardware and more. Samsung’s full teaser video follows below.

This post originally appeared on BGR.com: The Three Biggest Letters In Tech.

Facebook: Apple went silent over the weekend, announcements curtailed


Apparently, Tim Cook & Co. did indeed have more to announce at Apple’s iPhone 4S launch event earlier this week, including a Facebook app for the iPad. But Apple just “went dark” over the weekend, if Robert Scoble’s source at Facebook is to be believed. Apple’s leadership knew the sad news could come their way at any time, and keeping that in mind, in hindsight we believe Tim Cook and his team did a great job. Below is an excerpt of what Scoble posted on Google+ moments ago.
Today a guy I know at Facebook told me that Apple just “went dark” this weekend and stopped answering emails and phone calls (they had amazing new iPhone and iPad apps and a new developer platform all ready for announcing). Folks inside Facebook thought they had done something massively wrong. No, they hadn’t. Truth is you had something deeper to deal with.

All the must-read articles, must-see videos about Steve Jobs from across the Internet [update]



Even as I hold back the tears, the frequent sobs and the occasional choke, all I have been doing since I woke up to the news about Steve Jobs is surfing the web, checking my Twitter, Google+, Reader and Facebook for anything beyond the preemptive obituary every media publication had already prepared and constantly updated for years. I was looking for features that shared a reporter’s experiences with Jobs, insights from people who have seen him up close and been caught up in his ‘reality distortion field’, or just something that is not a boring obituary. I’m sure there are a lot of you looking for similar stuff, so read on for my compilation of features you shouldn’t miss, which I will keep updating as and when I see something new and worthwhile. Do ping me if you find anything interesting and I will add it to this curated list.
UPDATE: Thanks, Ab, for pointing out The Wirecutter post. Added it to the list. Keep ‘em coming, fellas.
Read on…
The Steve Jobs I knew – Walt Mossberg, WSJ
Mossberg shares his personal meetings with Jobs, including one when Jobs invited him to his house after his liver transplant.
A front row seat to Steve Jobs’ career – Robert Scoble, The Next Web
This piece first appeared when Steve Jobs handed over the reins to Tim Cook earlier this year. Scoble tells the world why he chose to take a front-row seat for Jobs’ iPad 2 keynote address, building up to his own association with Apple and offering an insight into Jobs and Apple after Jobs.
Steve Jobs: Imitated, never duplicated – David Pogue, The New York Times
Pogue tells us why we failed to see Jobs’ vision when he removed features (floppy drives, ethernet ports, Flash…) while his competitors were busy adding new stuff. The headline says the rest.
The Tao of Steve – Om Malik, GigaOm
Rarely does Om Malik remove his journalist hat and write emotionally about people he covers. Read this story of how Steve Jobs became his Elvis.
Steve Jobs’ greatest achievements – Michael Calore, Wired
More of a resource than a thought-provoking piece or a teary-eyed farewell, this one is simply a slideshow of Steve Jobs’ achievements through his products. A handy resource for a quick trip down memory lane. Many of you might even have most of the gadgets that make this list.
Hear from Brian Lam, whose team at Gizmodo stole Apple’s thunder by showing off a lost iPhone 4 prototype to the world. A must-read to see how Steve Jobs reacted to the incident and how Lam now thinks what they did wasn’t the best thing to do.
Steve Jobs and the reserved seat – Alex Heath, CultofMac
Sentimental stuff, about how a reserved seat in the front row at Tuesday’s Let’s Talk iPhone event could have been for Jobs. Go there for the photograph.
Stumbled upon this site, PINE APPLE – Pining for Apple, which has Apple videos from the eighties as well as magazine covers where Steve Jobs has been featured.
I’m assuming you have heard Steve Jobs’ greatest speech ever, but here’s the video from his Stanford commencement address of 2005, just in case.
Another video doing the rounds today is an unreleased ‘Think Different’ ad campaign with Steve Jobs narrating the script.
And of course, Apple’s first Macintosh ad from 1984.

28 Sept 2011

cloud computing

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

Infrastructure-as-a-Service providers such as Amazon Web Services supply virtual server instances and storage on demand; customers use the provider's application programming interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and bring more online as soon as it is required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it's sometimes referred to as utility computing.
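As a hedged illustration of that API-driven, pay-as-you-go model, the sketch below starts one virtual server, waits for it to come up, and then terminates it so the meter stops running. It assumes the boto library and AWS credentials configured in the environment; the AMI ID and instance type are placeholders, not recommendations.

    import time
    import boto.ec2

    # Connect using credentials from the environment or boto config.
    conn = boto.ec2.connect_to_region("us-east-1")

    # Ask the provider for one small virtual server (placeholder AMI ID).
    reservation = conn.run_instances("ami-00000000", instance_type="m1.small")
    instance = reservation.instances[0]

    # Poll until the provider reports the instance as running.
    while instance.state != "running":
        time.sleep(10)
        instance.update()
    print("Server ready at", instance.public_dns_name)

    # ... use the capacity for as long as it is needed ...

    # Release it; billing stops once the instance terminates.
    conn.terminate_instances(instance_ids=[instance.id])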

Platform-as-a-service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google App Engine are examples of PaaS. Developers need to know that there are currently no standards for interoperability or data portability in the cloud; some providers will not allow software created by their customers to be moved off the provider's platform.
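The unit of work on a PaaS is typically nothing more than application code that follows the provider's conventions; the servers, scaling and patching underneath are the provider's problem. A minimal WSGI-style handler of the sort an early Python PaaS would host might look like this sketch (the names and port are illustrative, not any provider's required layout):

    # Minimal WSGI application of the kind deployed to a Python PaaS.
    # The platform supplies the web server, routing and scaling; the
    # developer supplies this callable plus a small configuration file.
    def application(environ, start_response):
        body = b"Hello from someone else's infrastructure\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    # Local smoke test only; in production the provider hosts the app.
    if __name__ == "__main__":
        from wsgiref.simple_server import make_server
        make_server("", 8080, application).serve_forever()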

In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure, the software product and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.


supply chain management (SCM)

Supply chain management (SCM) is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer. Supply chain management involves coordinating and integrating these flows both within and among companies. It is said that the ultimate goal of any effective supply chain management system is to reduce inventory (with the assumption that products are available when needed). As a solution for successful supply chain management, sophisticated software systems with Web interfaces are competing with Web-based application service providers (ASPs) that promise to provide part or all of the SCM service for companies that rent their service.
Supply chain management flows can be divided into three main flows:
  • The product flow
  • The information flow
  • The finances flow
The product flow includes the movement of goods from a supplier to a customer, as well as any customer returns or service needs. The information flow involves transmitting orders and updating the status of delivery. The financial flow consists of credit terms, payment schedules, and consignment and title ownership arrangements.
There are two main types of SCM software: planning applications and execution applications. Planning applications use advanced algorithms to determine the best way to fill an order. Execution applications track the physical status of goods, the management of materials, and financial information involving all parties.
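As a toy illustration of what a planning application does, the sketch below chooses the cheapest single warehouse that can fill an order outright. Real planning engines weigh far more constraints (lead times, capacity, transport modes, order splitting), and every name and number here is invented.

    # Toy order-filling planner: pick the cheapest single warehouse that
    # has enough stock. All data below is invented for illustration.
    def plan_order(sku, quantity, warehouses):
        candidates = [w for w in warehouses if w["stock"].get(sku, 0) >= quantity]
        if not candidates:
            return None  # a real planner would split the order or back-order
        return min(candidates,
                   key=lambda w: quantity * w["unit_cost"][sku] + w["ship_cost"])

    warehouses = [
        {"name": "Dallas", "stock": {"WIDGET": 500},
         "unit_cost": {"WIDGET": 2.10}, "ship_cost": 40.0},
        {"name": "Newark", "stock": {"WIDGET": 120},
         "unit_cost": {"WIDGET": 1.95}, "ship_cost": 65.0},
    ]
    best = plan_order("WIDGET", 200, warehouses)
    print(best["name"] if best else "no single source can fill the order")

An execution application would then pick up where this leaves off, tracking the physical shipment and the money that follows it.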
Some SCM applications are based on open data models that support the sharing of data both inside and outside the enterprise (this is called the extended enterprise, and includes key suppliers, manufacturers, and end customers of a specific company). This shared data may reside in diverse database systems, or data warehouses, at several different sites and companies.
By sharing this data "upstream" (with a company's suppliers) and "downstream" (with a company's clients), SCM applications have the potential to improve the time-to-market of products, reduce costs, and allow all parties in the supply chain to better manage current resources and plan for future needs.
Increasing numbers of companies are turning to Web sites and Web-based applications as part of the SCM solution. A number of major Web sites offer e-procurement marketplaces where manufacturers can trade and even make auction bids with suppliers.

TCP/IP (Transmission Control Protocol/Internet Protocol)

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they'll be reassembled at the destination.
TCP/IP uses the client/server model of communication in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be "stateless" because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned. Its connection remains in place until all packets in a message have been received.)
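The client/server exchange described above is easy to see in code. Below is a minimal sketch using Python's standard socket module: the server listens, the client connects and sends a request, and TCP itself handles splitting the bytes into packets and reassembling them in order at each end. The host and port are arbitrary examples.

    import socket

    def server(host="127.0.0.1", port=8080):
        # Listen for one client, read its request, send a reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                request = conn.recv(1024)   # TCP has already reassembled the bytes
                conn.sendall(b"reply to: " + request)

    def client(host="127.0.0.1", port=8080):
        # Open a connection, make a request, print the response.
        with socket.create_connection((host, port)) as sock:
            sock.sendall(b"GET /page")
            print(sock.recv(1024).decode())

    # Run server() and client() in separate processes (or threads) to try it.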
Many Internet users are familiar with the even higher layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite."
Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider's modem.
Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

Kilo, mega, giga, tera, peta, and all that

Also see Kibi, mebi, gibi, tebi, pebi, and all that, which are relatively new prefixes designed to express power-of-two multiples.
Kilo, mega, giga, tera, and peta are among the list of prefixes that are used to denote the quantity of something, such as, in computing and telecommunications, a byte or a bit. Sometimes called prefix multipliers, these prefixes are also used in electronics and physics. Each multiplier consists of a one-letter abbreviation and the prefix that it stands for.
In communications, electronics, and physics, multipliers are defined in powers of 10 from 10^-24 to 10^24, proceeding in increments of three orders of magnitude (10^3, or 1,000). In IT and data storage, multipliers are defined in powers of 2 from 2^10 to 2^80, proceeding in increments of 2^10, or 1,024. These multipliers are denoted in the following table.

Prefix    Symbol(s)    Power of 10    Power of 2
yocto-    y            10^-24 *       --
zepto-    z            10^-21 *       --
atto-     a            10^-18 *       --
femto-    f            10^-15 *       --
pico-     p            10^-12 *       --
nano-     n            10^-9 *        --
micro-    µ            10^-6 *        --
milli-    m            10^-3 *        --
centi-    c            10^-2 *        --
deci-     d            10^-1 *        --
(none)    --           10^0           2^0
deka-     D            10^1 *         --
hecto-    h            10^2 *         --
kilo-     k or K **    10^3           2^10
mega-     M            10^6           2^20
giga-     G            10^9           2^30
tera-     T            10^12          2^40
peta-     P            10^15          2^50
exa-      E            10^18 *        2^60
zetta-    Z            10^21 *        2^70
yotta-    Y            10^24 *        2^80
* Not generally used to express data speed
** k = 10^3 and K = 2^10
Examples of quantities or phenomena in which power-of-10 prefix multipliers apply include frequency (including computer clock speeds), physical mass, power, energy, electrical voltage, and electrical current. Power-of-10 multipliers are also used to define binary data speeds. Thus, for example, 1 kbps (one kilobit per second) is equal to 10^3, or 1,000, bps (bits per second); 1 Mbps (one megabit per second) is equal to 10^6, or 1,000,000, bps. (The lowercase k is the technically correct symbol for kilo- when it represents 10^3, although the uppercase K is often used instead.)
When binary data is stored in memory or fixed media such as a hard drive, diskette, ZIP disk, tape, or CD-ROM, power-of-2 multipliers are used. Technically, the uppercase K should be used for kilo- when it represents 2^10. Therefore 1 KB (one kilobyte) is 2^10, or 1,024, bytes; 1 MB (one megabyte) is 2^20, or 1,048,576, bytes.
The choice of power-of-10 versus power-of-2 prefix multipliers can appear arbitrary. It helps to remember that in common usage, multiples of bits are almost always expressed in powers of 10, while multiples of bytes are almost always expressed in powers of 2. Rarely is data speed expressed in bytes per second, and rarely is data storage or memory expressed in bits. Such usages are considered improper. Confusion is not likely, therefore, provided one adheres strictly to the standard usages of the terms bit and byte.
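The distinction is easy to check with a few lines of arithmetic; the snippet below contrasts the decimal prefixes used for data rates with the binary multiples used for storage.

    # Decimal (power-of-10) prefixes for data rates vs. binary
    # (power-of-2) multiples for storage.
    kbps = 10 ** 3     # 1 kbps = 1,000 bits per second
    Mbps = 10 ** 6     # 1 Mbps = 1,000,000 bits per second
    KB   = 2 ** 10     # 1 KB   = 1,024 bytes
    MB   = 2 ** 20     # 1 MB   = 1,048,576 bytes

    # Rough best-case time to move one "megabyte" over a one-megabit link:
    print(MB * 8 / Mbps, "seconds")   # about 8.39 seconds, not exactly 8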

3G (third generation of mobile telephony)

3G refers to the third generation of mobile telephony (that is, cellular) technology. The third generation, as the name suggests, follows two earlier generations.
The first generation (1G) began in the early 1980s with commercial deployment of Advanced Mobile Phone Service (AMPS) cellular networks. Early AMPS networks used Frequency Division Multiple Access (FDMA) to carry analog voice over channels in the 800 MHz frequency band.
The second generation (2G) emerged in the 1990s when mobile operators deployed two competing digital voice standards. In North America, some operators adopted IS-95, which used Code Division Multiple Access (CDMA) to multiplex up to 64 calls per channel in the 800 MHz band. Across the world, many operators adopted the Global System for Mobile Communications (GSM) standard, which used Time Division Multiple Access (TDMA) to multiplex up to 8 calls per channel in the 900 and 1800 MHz bands.
The International Telecommunication Union (ITU) defined the third generation (3G) of mobile telephony standards, IMT-2000, to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM could deliver not only voice, but also circuit-switched data at speeds up to 14.4 Kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater speeds.
However, to get from 2G to 3G, mobile operators had to make "evolutionary" upgrades to existing networks while simultaneously planning their "revolutionary" new mobile broadband networks. This led to the establishment of two distinct 3G families: 3GPP and 3GPP2.
The 3rd Generation Partnership Project (3GPP) was formed in 1998 to foster deployment of 3G networks that descended from GSM. 3GPP technologies evolved as follows.
• General Packet Radio Service (GPRS) offered speeds up to 114 Kbps.
• Enhanced Data Rates for Global Evolution (EDGE) reached up to 384 Kbps.
• UMTS Wideband CDMA (WCDMA) offered downlink speeds up to 1.92 Mbps.
• High Speed Downlink Packet Access (HSDPA) boosted the downlink to 14 Mbps.
• LTE Evolved UMTS Terrestrial Radio Access (E-UTRA) is aiming for 100 Mbps.

GPRS deployments began in 2000, followed by EDGE in 2003. While these technologies are defined by IMT-2000, they are sometimes called "2.5G" because they did not offer multi-megabit data rates. EDGE has now been superseded by HSDPA (and its uplink partner, HSUPA). According to the 3GPP, there were 166 HSDPA networks in 75 countries at the end of 2007. The next step for GSM operators: LTE E-UTRA, based on specifications completed in late 2008.
A second organization, the 3rd Generation Partnership Project 2 (3GPP2), was formed to help North American and Asian operators using CDMA2000 transition to 3G. 3GPP2 technologies evolved as follows.

• One Times Radio Transmission Technology (1xRTT) offered speeds up to 144 Kbps.
• Evolution Data Optimized (EV-DO) increased downlink speeds up to 2.4 Mbps.
• EV-DO Rev. A boosted downlink peak speed to 3.1 Mbps and reduced latency.
• EV-DO Rev. B can use 2 to 15 channels, with each downlink peaking at 4.9 Mbps.
• Ultra Mobile Broadband (UMB) was slated to reach 288 Mbps on the downlink.

1xRTT became available in 2002, followed by commercial EV-DO Rev. 0 in 2004. Here again, 1xRTT is referred to as "2.5G" because it served as a transitional step to EV-DO. EV-DO standards were extended twice – Revision A services emerged in 2006 and are now being succeeded by products that use Revision B to increase data rates by transmitting over multiple channels. The 3GPP2's next-generation technology, UMB, may not catch on, as many CDMA operators are now planning to evolve to LTE instead.
In fact, LTE and UMB are often called 4G (fourth generation) technologies because they increase downlink speeds by an order of magnitude. This label is a bit premature, because what constitutes "4G" has not yet been standardized. The ITU is currently considering candidate technologies for inclusion in the 4G IMT-Advanced standard, including LTE, UMB, and WiMAX II. Goals for 4G include data rates of at least 100 Mbps, use of OFDMA transmission, and packet-switched delivery of IP-based voice, data, and streaming multimedia.
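To put those peak rates in perspective, here is a quick back-of-the-envelope calculation of how long a 5 MB file takes to transfer at a few of the figures quoted above; real-world throughput is, of course, well below peak.

    # Best-case transfer time for a 5 MB file at the peak rates above.
    FILE_BITS = 5 * 2 ** 20 * 8                  # 5 MB expressed in bits

    peak_kbps = {"GPRS": 114, "EDGE": 384, "EV-DO Rev. A": 3100, "HSDPA": 14000}
    for name, kbps in peak_kbps.items():
        print(f"{name:>12}: {FILE_BITS / (kbps * 1000):7.1f} s")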

Intel demonstrates McAfee DeepSAFE security platform

Intel has demonstrated a new hardware-based security platform that it says could represent the next phase in the evolution of security defenses.
At the Intel Developer Forum in San Francisco, Intel partially unveiled McAfee DeepSAFE, a security platform it will use to inject security into Intel silicon. The platform is designed to enable McAfee to run security software independent of the operating system to gain visibility into rootkits and other malware that Intel says can easily bypass traditional operating system defenses.
While the use case for such a technology is broad -- from desktops, laptops and smartphones to tiny embedded devices that run mechanical systems -- neither McAfee nor Intel is releasing details on how the technology will initially be applied.
Vimal Solanki, senior vice president of corporate strategy at Intel, said the DeepSAFE platform is McAfee technology and added that the security vendor, which operates as a subsidiary of Intel, would build specific products using it to connect to Intel chips.  McAfee said last year at its Focus user conference that integrated chip security would be a major part of its product strategy.
“By going below the OS and directly interfacing with the silicon, you now have a whole new vantage point,” Solanki said.  “You’re not at the mercy of operating system to deliver security.”
McAfee will be able to enhance its existing products and deliver new capabilities, he said. In addition, new products will not require new Intel chipsets, Solanki said. “If you bought a PC in recent history, we will deliver solutions that use existing hardware.”
Solanki said Intel has offered the capabilities used by DeepSAFE to enable other security vendors to connect to its chipset. DeepSAFE is designed to enable McAfee to offer CPU event monitoring. It uses Intel VTx technology available on Intel Core i3, i5, i7 processors and vPro platforms. “The silicon capabilities that DeepSAFE leverages are already open and available,” Solanki said.
DeepSAFE can run with Microsoft Windows 7. McAfee anticipates it will run with Windows 8 and is working on a version that runs with the Google Android mobile platform. Solanki and other Intel executives said DeepSAFE could be one of the biggest security innovations in the last 20 years, but industry analysts are downplaying the announcement.
“There is absolutely not enough detail to make a claim like that,” said Andrew Braunberg, research director for enterprise networks and security at Sterling, Va.-based Current Analysis. “They’re saying this isn’t a product, it’s a technology, so we’re all kind of waiting to see exactly what’s coming down the pipe.”
Intel acquired McAfee last summer in a $7.7 billion deal. Since then, McAfee CEO Dave DeWalt has said the company would work on ways to bake its technology into Intel chipsets. At an investor conference in March, DeWalt said the goal would be to find ways to gain visibility into devices that have a tiny footprint but could be used by attackers to gain access to company networks.
DeWalt told investors the company would work closely with Intel's Wind River subsidiary, a firm Intel acquired in 2009 that makes operating system software for printers, ATMs, network gateways, satellite systems, mobile devices and other embedded systems. It’s unclear whether DeepSAFE uses Wind River technology, which is designed to run in a tiny footprint and can interface with hardware-based crypto functions.

PCI Council issues point-to-point encryption validation requirements

The PCI Security Standards Council issued point-to-point encryption validation requirements as part of a new program that aims to provide merchants with a list of certified products.
The PCI encryption requirements document, PCI Point-to-Point Encryption Solution Requirements, was released this week and provides vendors, assessors and merchants with guidelines for hardware-based point-to-point encryption implementations that support PCI DSS compliance. The Council said its requirements focus on ways to secure and monitor the hardware, develop and maintain secure applications, and use secure key management methodologies.
Point-to-point or end-to-end encryption providers have been touting the benefits of encrypting cardholder data from the time a credit card is swiped at a point-of-sale device to the time it reaches a card processor. But merchants have had no easy way of evaluating individual providers to determine whether the equipment, applications and capabilities meet PCI DSS requirements from the time credit card data is captured to its transmission to a processor and bank systems.  The problem has resulted in some high-profile data security breaches that highlighted some holes in PCI assessments and so-called end-to-end encryption implementations.
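Conceptually, point-to-point encryption means the card data is encrypted the moment it is captured at the point of interaction and only decrypted inside the processor's environment; everything in between sees ciphertext. The toy sketch below shows the shape of that flow using the cryptography library's Fernet recipe. It is purely illustrative: a validated P2PE solution does this in tamper-resistant hardware with formal key management rather than in application code, and the card number shown is a standard test number.

    # Toy point-to-point encryption flow: encrypt at the swipe, pass only
    # ciphertext through the merchant network, decrypt at the processor.
    from cryptography.fernet import Fernet

    processor_key = Fernet.generate_key()   # conceptually, lives only in the processor's HSM

    def swipe_terminal(pan: bytes, key: bytes) -> bytes:
        """Point of interaction: encrypt immediately on capture."""
        return Fernet(key).encrypt(pan)

    def merchant_systems(token: bytes) -> bytes:
        """Merchant network only ever handles ciphertext -- the scope-reduction argument."""
        return token

    def processor_decrypt(token: bytes, key: bytes) -> bytes:
        """Decryption environment at the card processor."""
        return Fernet(key).decrypt(token)

    token = swipe_terminal(b"4111111111111111", processor_key)
    print(processor_decrypt(merchant_systems(token), processor_key))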
Last year the Council called point-to-point encryption implementations too immature to properly evaluate. Bob Russo, general manager of the PCI SSC, said that many merchants have purchased and deployed hardware-based point-to-point encryption systems, prompting the PCI Council to create the validation program. Testing procedures will be released later this year followed by a new training program for qualified security assessors, Russo said.  A certified list of systems will be produced in the spring of 2012.
“Merchants think that buying point-to-point encryption solutions will reduce the scope of what they’re doing and that’s not always the case,” Russo said. “We know people are buying this right now so we wanted to make sure we produced something meaningful as well as a program that certifies some of these things.”
The first phase of the point-to-point encryption program is to focus on requirements for implementations that combine hardware-based encryption PIN transaction security (PTS) devices, where the card is swiped, with hardware security modules, where the decryption takes place. In the second phase, validation requirements will address hybrid systems and pure software point-to-point encryption deployments, Russo said.
The validation document lays out six areas that will be assessed in a point-to-point encryption implementation. The Council will oversee evaluation of the security controls used on the hardware, the applications within the hardware, the environment where encryption hardware is present, the transmissions between the encryption and decryption environments, the decryption environment itself and the key management operations.
The document lays out the responsibilities of device manufacturers, application vendors and point-to-point encryption vendors. It combines validation programs run under the Payment Application Data Security Standards (PA-DSS) and the PCI PIN Transaction Security laboratory, which currently tests point of interaction devices.
A Qualified Security Assessor will evaluate the complete deployment to ensure the hardware, applications and key management processes fully protect card holder data by meeting the PCI DSS requirements, according to the document.
A fully validated point-to-point encryption implementation will reduce the scope of PCI DSS on a merchant’s systems, but the PCI Council cautions that merchants would still be required to be evaluated against PCI DSS to ensure the system is being secured and maintained.
“This scope reduction does not entirely remove or replace all of a merchant‘s PCI DSS compliance or validation obligations,” according to the PCI point-to-point encryption validation document.  “Applicable requirements covering the education of staff handling account data, security policies, third-party relationships, and physical security of media will still apply to merchants that have implemented a validated P2PE solution.”

Oracle-owned MySQL.com hacked, serves malware to visitors

MySQL.com was compromised and was being used to serve malware to visitors running Windows for a short time Monday. The Oracle-owned site quickly responded to the hack, however, and removed the malware to stop the infections.
Hackers installed JavaScript code on the open-source database site that redirected visitors and attacked their systems with a BlackHole exploit kit. Because of the kit, the systems of those visiting the site quietly and automatically loaded the JavaScript file.
Security vendor Armorize Technologies discovered the attack early Monday morning. According to Armorize chief executive Wayne Huang in a blog post, “it exploits the visitor’s browsing platform (the browser, the browser plugins like Adobe Flash, Adobe PDF, etc, Java,…), and upon successful exploitation, permanently installs a piece of malware into the visitor’s machine, without the visitor’s knowledge.”
Armorize also added that “the visitor doesn’t need to click or agree to anything; simply visiting MySQL.com with a vulnerable browsing platform will result in an infection.”
Huang said his team had yet to discover the goal of the attack, but typically attackers install malware to create botnet computers that can be rented out or to steal victims’ passwords. He also said he didn’t know how dangerous the infection would be to the systems hit, and that it would keep running even after a reboot of the machine.
The middle, redirection site was found to be located in Germany, while the final site that actually housed the malware was located in Sweden.
The Armorize blog also posted a video explaining how the infection spread on visitors’ machines. It added that only 4 out of 44 vendors on the VirusTotal site could detect the malware.

Next-gen firewall vs. UTM device: Which is better for Web 2.0 risks?

It seems to me UTMs are basically stateful firewalls with a few additions and that, for Web 2.0 applications, UTM is obsolete. But what would you define as next-generation firewalls, and would you recommend them, in  particular, to protect against Web 2.0 threats?
When stateful inspection firewalls first came on the scene in the 1990s, they revolutionized network security by allowing perimeter protection to move beyond the simple packet-by-packet filtering process used up until that point.  Stateful inspection added intelligence and memory to the firewall.  Instead of simply making independent decisions each time it encountered a packet, the firewall was now context-aware, able to make decisions based upon the information it had gathered about a connection.
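To see what that added context means in practice, here is a toy sketch of the bookkeeping a stateful firewall performs: it remembers connections it has already permitted and judges later packets against that memory instead of in isolation. The rule table and addresses are invented, and no real product is implemented in a dozen lines like this.

    # Toy connection-tracking table of the kind a stateful firewall keeps.
    # A packet is allowed either because a rule permits the new connection
    # or because it belongs to a connection that was already accepted.
    allowed_new = {("10.0.0.5", 443)}        # rule: inside host may reach port 443
    connections = set()                       # remembered, established flows

    def handle_packet(src, sport, dst, dport):
        flow = (src, sport, dst, dport)
        reverse = (dst, dport, src, sport)
        if flow in connections or reverse in connections:
            return "ALLOW (established)"      # context from earlier packets
        if (src, dport) in allowed_new:
            connections.add(flow)             # start tracking the new flow
            return "ALLOW (new connection)"
        return "DROP"

    print(handle_packet("10.0.0.5", 52000, "93.184.216.34", 443))  # new, allowed
    print(handle_packet("93.184.216.34", 443, "10.0.0.5", 52000))  # reply, allowed
    print(handle_packet("198.51.100.9", 4444, "10.0.0.5", 22))     # unsolicited, dropped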

You’re correct in pointing out that unified threat management (UTM) products are basically stateful inspection firewalls with some additional security functionality.  You’ll find that these products often consolidate firewall, intrusion prevention, content filtering, antivirus and other security functionality into a single box.  While this approach is not often appropriate for a large enterprise, a UTM device can be a very effective product for smaller or midsize enterprises seeking to limit security expenditures.

Next-generation firewalls (NGFW) represent the next major step in the development of firewall technology.  I’d actually consider them an advancement from stateful inspection technology, rather than comparing them to UTM devices.  A next-gen firewall is designed to combine the functionality of a firewall and an IPS, while adding detailed application awareness into the mix.  Like the introduction of stateful inspection, NGFWs bring additional context to the firewall’s decision-making process by providing it with the capability of understanding the details of the Web application traffic passing through it, taking action to block traffic that might exploit Web application vulnerabilities.

UTMs and NGFWs will peacefully coexist in the marketplace for quite some time, because they serve very different markets.  While UTMs are targeted at the midsize enterprise that doesn’t generally host Web applications, NGFWs will find their home in large enterprises supporting Web 2.0 applications.

27 Sept 2011

Google Plus G+


Google+ (or Google Plus) is the hot new thing in social networking. Everyone and their little sister have been clamouring for invites to set up their account in this new playground. Unfortunately, users are having to start from scratch without any of the connections, posts or other media cultivated on other social media sites. For example, Facebook is one of the most popular photo sharing websites, so it is likely that users have a lot of photos on it. Unfortunately, it is difficult to quickly export these photos from Facebook to Google Plus, as Facebook does not offer any automated tool to do so. Fortunately, enterprising users have created a Google Chrome extension that automatically transfers your Facebook photos to Google Plus.

I am sure there are a number of different methods to transfer your photos; however, as I am an ardent Google Chrome user, this article describes how to export all your photos using the dedicated Google Chrome extension.


Installing and using the Chrome Extension

Firstly, head over to the Move Your Photos extension page and install it.
Once the extension is installed, you will notice a small Picasa icon to the right of the Chrome address bar, near the wrench. Click on this icon and you will be presented with a link to login to your Facebook account. This will grant the extension access to the photos on your Facebook account.
After you have logged in, the extension will start fetching all your Facebook photos. If you have any empty albums on Picasa, the extension will also notify you of this and allow you to delete them.
Once the pictures have been fetched, you can select which albums or individual photos to upload to your Picasa account.
Note the guide used to indicate which photos have been uploaded (green border), which photos are in the queue (yellow border), which photos are not to be uploaded (grey border) and which images are altogether unavailable (red border).
Unfortunately, the extension appears to be limited to fetching the photos you have uploaded yourself. So all of your albums, mobile uploads, profile pictures and wall photos are included; however, photos in which you are merely tagged are not.
Once you have decided which photos to upload, select “Upload” from the bottom of the page. You may have to scroll down if you have a lot of photos.
The upload process will take some time depending on the number of photos you have selected.
Once all the photos have been uploaded, they will appear in segregated albums on your Web Picasa account.
The album is set to private by default and only those with a link to the album will be able to view the photos.
You can now choose which of your circles to share the photos with.

Conclusion

This app is useful if all your photos are stored on Facebook but you feel like switching your allegiance to Google+. It would be a lot more useful, however, if it were possible to upload all your photos (including tagged images) from Facebook to Google+. It is likely that privacy settings are blocking this type of functionality.

Living with Fedora – A Debian/Ubuntu User’s Take on Fedora 15

I’ve been a die-hard Debian fan for about 10 years, and I’ve written several articles on the subject. That said, most of our Linux-savvy readers are Ubuntu users, so that’s been my main desktop OS for as long as I’ve been a MakeTechEasier writer. Ubuntu has always been fine and generally got the job done without hassle; however, this past release (11.04, Natty Narwhal) has been the cause of a rift among many Ubuntu users. This release pushed Unity, their homegrown desktop environment, front and center. Like many others, I’ve never managed to get a feel for Unity. After weighing my options, I decided to jump ship and try out Fedora 15. It’s the first Fedora I’ve tried since Core 1, and things certainly have changed.

Basic Differences

We already spent some time comparing Ubuntu 11.04 and Fedora 15, so I won’t dwell on that here. In short, both have decided to move beyond the traditional Gnome 2 desktop and into hardware-accelerated modern setups. Ubuntu created Unity and aimed it squarely at casual computer users, whereas Fedora bet the farm on Gnome 3, a newly redesigned and radically different Gnome desktop.
It’s certainly no secret that this author prefers Gnome 3, and that was a major factor in my decision to try Fedora. It’s among the first major distributions to put their full weight behind this relatively new project.
There are of course many differences between Ubuntu and Fedora, but this review will focus on the desktop user experience.

The Good

As mentioned above, the most noticeable difference between Fedora and Ubuntu, or even between Fedora 15 and earlier versions, is that it now runs the Gnome 3 desktop. This is a near-complete rewrite of the Gnome interface and many of its underlying libraries. It takes advantage of hardware-based 3D acceleration to provide extraordinarily smooth effects when creating, destroying, or moving windows. In fact, it’s this author’s opinion that Gnome 3 has mastered this aspect better than any other desktop interface from any operating system. There are no visual events in Gnome 3 that feel jerky or sudden – absolutely everything is smooth and cozy.
Next up on the list of positive traits: Gnome 3 can be scripted and themed with… wait for it… JavaScript and CSS! This means that thousands of developers can immediately apply these popular web technologies to their desktop, customizing it any way they wish using skills they already possess.

The Bad

It’s new. It’s really new, and that has some consequences. Most notably, it means that Gnome 3 lacks a lot of the features users have come to expect from Gnome 2, such as integrated chat and social features and many system configuration options.
Regarding performance, that’s a little bit tricky. I am uncertain whether the problem is caused by Gnome itself, or perhaps some misbehaving application, but on my desktop (and I’m not the only one, judging by some posts I’ve found online) the system seems to get progressively slower the longer it’s used. It’s not normal to have to reboot a Linux system every day, especially to fix a problem like this, but until I’m able to determine the cause of the problem, I can’t rest the blame solely on Gnome.
One thing I can clearly identify as a software problem is the apparent trouble Fedora has with saving my application preferences. Google Chrome repeatedly insists that it’s not the default browser, and Nautilus refuses to accept any changes to its application associations. No matter how many times I tell it to use VLC for video, it defaults back to the built-in player the next time Nautilus is opened. This is true for all the file types I have attempted to change.
Regarding workspace management, I’m torn. The initial builds of Gnome Shell that we originally reviewed here used an excellent grid-based layout (similar to what you can do with Gnome 2 and Compiz) that I adored, and that alone was just about enough to make me fall in love with this desktop setup.
Later builds moved it to a linear approach, and eventually landed on an automatic linear approach. Personally I can’t stand it when my PC makes such decisions for me, so my first task was to set about learning how to disable that functionality.
If extensions were available allowing users to choose which workspace management method they prefer, this would instantly become one of Gnome 3’s killer features. It is my opinion that no other desktop environment offers matching workspace management capability. Unity is pretty good at it, but I’ve seen Gnome do better.

Conclusion

If I were to sum up my opinion of Fedora 15 in one sentence, it’d have to be “rough, but with great potential”. Gnome 3 is still a baby, and Fedora took a bold step by pushing it to the forefront; I applaud them for that. As cozy as it may be, there’s still a whole lot of polish left to be done: the front end is still rough, and the back end doesn’t seem to have caught up with all the changes yet. If Fedora can take the successes in this release (which are many) and smooth out some of those rough spots (which are also many), then Fedora 16 is likely to pull a lot of users away from Ubuntu permanently. From the looks of it, I’ll be one of them.