28 Sept 2011

cloud computing

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
A cloud service has three distinct characteristics that differentiate it from traditional hosting. It is sold on demand, typically by the minute or the hour; it is elastic -- a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet and a weak economy, have accelerated interest in cloud computing.
A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.

Infrastructure-as-a-Service providers such as Amazon Web Services supply virtual server instances and storage on demand; customers use the provider's application program interface (API) to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing allows a company to pay for only as much capacity as is needed, and to bring more online as soon as it is required. Because this pay-for-what-you-use model resembles the way electricity, fuel and water are consumed, it is sometimes referred to as utility computing.
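As a rough illustration of this on-demand model, the short Python sketch below starts and later stops a virtual server through a provider's API. The use of the boto3 library, the region and the instance ID are assumptions made for the example, not details from the article.

# Illustrative sketch only: assumes the boto3 SDK, valid AWS credentials,
# and a placeholder region and instance ID.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start a stopped virtual server; usage charges accrue only while it runs.
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])

# ...do some work, then release the capacity to stop paying for it.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])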

Platform-as-a-service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. Force.com (an outgrowth of Salesforce.com) and Google Apps are examples of PaaS. Developers need to know that there are currently no standards for interoperability or data portability in the cloud, and some providers will not allow software created by their customers to be moved off the provider's platform.
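To make the development model concrete, here is a minimal sketch of the kind of web application a developer might hand to a PaaS to run. It is a generic Python WSGI app; the choice of language, framework and port is an assumption for illustration and is not tied to any particular provider mentioned above.

# Minimal WSGI application of the sort a PaaS typically hosts: the provider
# supplies the runtime, web server and scaling; the developer supplies this code.
def application(environ, start_response):
    body = b"Hello from a platform-hosted app"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# For local testing only; on a real PaaS the platform runs the app for you.
if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()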

In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure, the software product and interacts with the user through a front-end portal. SaaS is a very broad market. Services can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

See also: hybrid cloud, cloud backup

supply chain management (SCM)

Supply chain management (SCM) is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer. Supply chain management involves coordinating and integrating these flows both within and among companies. It is said that the ultimate goal of any effective supply chain management system is to reduce inventory (with the assumption that products are available when needed). As a solution for successful supply chain management, sophisticated software systems with Web interfaces are competing with Web-based application service providers (ASP) who promise to provide part or all of the SCM service for companies who rent their service.
Supply chain management flows can be divided into three main flows:
  • The product flow
  • The information flow
  • The finances flow
The product flow includes the movement of goods from a supplier to a customer, as well as any customer returns or service needs. The information flow involves transmitting orders and updating the status of delivery. The financial flow consists of credit terms, payment schedules, and consignment and title ownership arrangements.
There are two main types of SCM software: planning applications and execution applications. Planning applications use advanced algorithms to determine the best way to fill an order. Execution applications track the physical status of goods, the management of materials, and financial information involving all parties.
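As a simplified illustration of what a planning application does, the toy Python function below picks the cheapest warehouse that can fill an order in full. The warehouses, stock levels and shipping costs are invented for the example; real planning engines use far more sophisticated algorithms.

# Toy order-filling planner: choose the cheapest warehouse with enough stock.
# All data below is made up for illustration.
warehouses = {
    "Chicago": {"stock": 120, "ship_cost": 4.10},
    "Dallas":  {"stock": 45,  "ship_cost": 3.20},
    "Seattle": {"stock": 300, "ship_cost": 5.75},
}

def plan_order(quantity):
    candidates = [(site["ship_cost"], name)
                  for name, site in warehouses.items()
                  if site["stock"] >= quantity]
    if not candidates:
        return None  # a real planner might split the order across several sites
    cost_per_unit, name = min(candidates)
    return name, round(cost_per_unit * quantity, 2)

print(plan_order(100))  # Chicago is the cheapest site that can cover 100 units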
Some SCM applications are based on open data models that support the sharing of data both inside and outside the enterprise (this is called the extended enterprise, and includes key suppliers, manufacturers, and end customers of a specific company). This shared data may reside in diverse database systems, or data warehouses, at several different sites and companies.
By sharing this data "upstream" (with a company's suppliers) and "downstream" (with a company's clients), SCM applications have the potential to improve the time-to-market of products, reduce costs, and allow all parties in the supply chain to better manage current resources and plan for future needs.
Increasing numbers of companies are turning to Web sites and Web-based applications as part of the SCM solution. A number of major Web sites offer e-procurement marketplaces where manufacturers can trade and even make auction bids with suppliers.

TCP/IP (Transmission Control Protocol/Internet Protocol)

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they'll be reassembled at the destination.
TCP/IP uses the client/server model of communication in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be "stateless" because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned. Its connection remains in place until all packets in a message have been received.)
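The client/server exchange described above can be sketched with Python's standard socket module. The loopback address, port number and message are placeholders chosen for the example.

# Minimal TCP client/server sketch. TCP provides the reliable, ordered byte
# stream; IP routing gets each packet to the right host.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5050))   # placeholder address and port
srv.listen(1)

def handle_one_client():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)            # request arrives as an ordered byte stream
        conn.sendall(b"echo: " + data)    # server's reply to the client

threading.Thread(target=handle_one_client, daemon=True).start()

# Client side: open a connection, send a request, read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5050))
    cli.sendall(b"hello")
    print(cli.recv(1024))                 # prints b'echo: hello'

srv.close()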
Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite."
Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider's modem.
Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

Kilo, mega, giga, tera, peta, and all that

Also see Kibi, mebi, gibi, tebi, pebi, and all that, which are relatively new prefixes designed to express power-of-two multiples.
Kilo, mega, giga, tera, and peta are among the list of prefixes that are used to denote the quantity of something, such as, in computing and telecommunications, a byte or a bit. Sometimes called prefix multipliers, these prefixes are also used in electronics and physics. Each multiplier consists of a one-letter abbreviation and the prefix that it stands for.
In communications, electronics, and physics, multipliers are defined in powers of 10 from 10^-24 to 10^24, proceeding in increments of three orders of magnitude (10^3 or 1,000). In IT and data storage, multipliers are defined in powers of 2 from 2^10 to 2^80, proceeding in increments of 2^10 (1,024). These multipliers are shown in the following table.

Prefix   Symbol(s)   Power of 10   Power of 2
yocto-   y           10^-24 *      --
zepto-   z           10^-21 *      --
atto-    a           10^-18 *      --
femto-   f           10^-15 *      --
pico-    p           10^-12 *      --
nano-    n           10^-9 *       --
micro-   µ           10^-6 *       --
milli-   m           10^-3 *       --
centi-   c           10^-2 *       --
deci-    d           10^-1 *       --
(none)   --          10^0          2^0
deka-    D           10^1 *        --
hecto-   h           10^2 *        --
kilo-    k or K **   10^3          2^10
mega-    M           10^6          2^20
giga-    G           10^9          2^30
tera-    T           10^12         2^40
peta-    P           10^15         2^50
exa-     E           10^18 *       2^60
zetta-   Z           10^21 *       2^70
yotta-   Y           10^24 *       2^80
* Not generally used to express data speed
** k = 10^3 and K = 2^10
Examples of quantities or phenomena in which power-of-10 prefix multipliers apply include frequency (including computer clock speeds), physical mass, power, energy, electrical voltage, and electrical current. Power-of-10 multipliers are also used to define binary data speeds. Thus, for example, 1 kbps (one kilobit per second) is equal to 10^3, or 1,000, bps (bits per second); 1 Mbps (one megabit per second) is equal to 10^6, or 1,000,000, bps. (The lowercase k is the technically correct symbol for kilo- when it represents 10^3, although the uppercase K is often used instead.)
When binary data is stored in memory or fixed media such as a hard drive, diskette, ZIP disk, tape, or CD-ROM, power-of-2 multipliers are used. Technically, the uppercase K should be used for kilo- when it represents 2^10. Therefore 1 KB (one kilobyte) is 2^10, or 1,024, bytes; 1 MB (one megabyte) is 2^20, or 1,048,576, bytes.
The choice of power-of-10 versus power-of-2 prefix multipliers can appear arbitrary. It helps to remember that in common usage, multiples of bits are almost always expressed in powers of 10, while multiples of bytes are almost always expressed in powers of 2. Rarely is data speed expressed in bytes per second, and rarely is data storage or memory expressed in bits. Such usages are considered improper. Confusion is not likely, therefore, provided one adheres strictly to the standard usages of the terms bit and byte.
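The difference is easy to check with a few lines of arithmetic; the small Python helper below is only a sketch written for this article, not part of any standard library.

# Decimal (power-of-10) prefixes, used for data rates in bits per second,
# versus binary (power-of-2) prefixes, used for storage and memory in bytes.
DECIMAL = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
BINARY  = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

print(DECIMAL["k"])                 # 1 kbps = 1000 bits per second
print(DECIMAL["M"])                 # 1 Mbps = 1000000 bits per second
print(BINARY["K"])                  # 1 KB   = 1024 bytes
print(BINARY["M"])                  # 1 MB   = 1048576 bytes
print(BINARY["M"] - DECIMAL["M"])   # the gap grows with each prefix: 48576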

3G (third generation of mobile telephony)

3G refers to the third generation of mobile telephony (that is, cellular) technology. The third generation, as the name suggests, follows two earlier generations.
The first generation (1G) began in the early 1980s with the commercial deployment of Advanced Mobile Phone Service (AMPS) cellular networks. Early AMPS networks used Frequency Division Multiple Access (FDMA) to carry analog voice over channels in the 800 MHz frequency band.
The second generation (2G) emerged in the 1990s when mobile operators deployed two competing digital voice standards. In North America, some operators adopted IS-95, which used Code Division Multiple Access (CDMA) to multiplex up to 64 calls per channel in the 800 MHz band. Elsewhere in the world, many operators adopted the Global System for Mobile communication (GSM) standard, which used Time Division Multiple Access (TDMA) to multiplex up to 8 calls per channel in the 900 and 1800 MHz bands.
The International Telecommunication Union (ITU) defined the third generation (3G) of mobile telephony standards, IMT-2000, to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM could deliver not only voice, but also circuit-switched data at speeds up to 14.4 Kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater speeds.
However, to get from 2G to 3G, mobile operators had to make "evolutionary" upgrades to existing networks while simultaneously planning their "revolutionary" new mobile broadband networks. This led to the establishment of two distinct 3G families: 3GPP and 3GPP2.
The 3rd Generation Partnership Project (3GPP) was formed in 1998 to foster deployment of 3G networks that descended from GSM. 3GPP technologies evolved as follows.
• General Packet Radio Service (GPRS) offered speeds up to 114 Kbps.
• Enhanced Data Rates for Global Evolution (EDGE) reached up to 384 Kbps.
• UMTS Wideband CDMA (WCDMA) offered downlink speeds up to 1.92 Mbps.
• High Speed Downlink Packet Access (HSDPA) boosted the downlink to 14 Mbps.
• LTE Evolved UMTS Terrestrial Radio Access (E-UTRA) is aiming for 100 Mbps.

GPRS deployments began in 2000, followed by EDGE in 2003. While these technologies are defined by IMT-2000, they are sometimes called "2.5G" because they did not offer multi-megabit data rates. EDGE has now been superseded by HSDPA (and its uplink partner, HSUPA). According to the 3GPP, there were 166 HSDPA networks in 75 countries at the end of 2007. The next step for GSM operators: LTE E-UTRA, based on specifications completed in late 2008.
A second organization, the 3rd Generation Partnership Project 2 (3GPP2), was formed to help North American and Asian operators using CDMA2000 transition to 3G. 3GPP2 technologies evolved as follows.

• One Times Radio Transmission Technology (1xRTT) offered speeds up to 144 Kbps.
• Evolution Data Optimized (EV-DO) increased downlink speeds up to 2.4 Mbps.
• EV-DO Rev. A boosted downlink peak speed to 3.1 Mbps and reduced latency.
• EV-DO Rev. B can use 2 to 15 channels, with each downlink peaking at 4.9 Mbps.
• Ultra Mobile Broadband (UMB) was slated to reach 288 Mbps on the downlink.

1xRTT became available in 2002, followed by commercial EV-DO Rev. 0 in 2004. Here again, 1xRTT is referred to as "2.5G" because it served as a transitional step to EV-DO. EV-DO standards were extended twice – Revision A services emerged in 2006 and are now being succeeded by products that use Revision B to increase data rates by transmitting over multiple channels. The 3GPP2's next-generation technology, UMB, may not catch on, as many CDMA operators are now planning to evolve to LTE instead.
In fact, LTE and UMB are often called 4G (fourth generation) technologies because they increase downlink speeds by an order of magnitude. This label is a bit premature, because what constitutes "4G" has not yet been standardized. The ITU is currently considering candidate technologies for inclusion in the 4G IMT-Advanced standard, including LTE, UMB, and WiMAX II. Goals for 4G include data rates of at least 100 Mbps, use of OFDMA transmission, and packet-switched delivery of IP-based voice, data, and streaming multimedia.

Intel demonstrates McAfee DeepSAFE security platform

Intel has demonstrated a new hardware-based security platform that it says could represent the next phase in the evolution of security defenses.
At the Intel Developer Forum in San Francisco, Intel partially unveiled McAfee DeepSAFE, a security platform it will use to inject security into Intel silicon. The platform is designed to enable McAfee to run security software independent of the operating system to gain visibility into rootkits and other malware that Intel says can easily bypass traditional operating system defenses.
While the use case for such a technology is broad -- from desktops, laptops and smartphones to tiny embedded devices that run mechanical systems -- neither McAfee nor Intel is releasing details on how the technology will initially be applied.
Vimal Solanki, senior vice president of corporate strategy at Intel, said the DeepSAFE platform is McAfee technology and added that the security vendor, which operates as a subsidiary of Intel, would build specific products using it to connect to Intel chips.  McAfee said last year at its Focus user conference that integrated chip security would be a major part of its product strategy.
“By going below the OS and directly interfacing with the silicon, you now have a whole new vantage point,” Solanki said.  “You’re not at the mercy of operating system to deliver security.”
McAfee will be able to enhance its existing products and deliver new capabilities, he said. In addition, new products will not require new Intel chipsets, Solanki said. “If you bought a PC in recent history, we will deliver solutions that use existing hardware.”
Solanki said Intel has offered the capabilities used by DeepSAFE to enable other security vendors to connect to its chipset. DeepSAFE is designed to enable McAfee to offer CPU event monitoring. It uses Intel VTx technology available on Intel Core i3, i5, i7 processors and vPro platforms. “The silicon capabilities that DeepSAFE leverages are already open and available,” Solanki said.
DeepSAFE can run with Microsoft Windows 7. McAfee anticipates it will run with Windows 8 and is working on a version that runs with the Google Android mobile platform. Solanki and other Intel executives said DeepSAFE could be one of the biggest security innovations in the last 20 years, but industry analysts are downplaying the announcement.
“There is absolutely not enough detail to make a claim like that,” said Andrew Braunberg, research director for enterprise networks and security at Sterling, Va.-based Current Analysis. “They’re saying this isn’t a product, it’s a technology, so we’re all kind of waiting to see exactly what’s coming down the pipe.”
Intel acquired McAfee last summer in a $7.7 billion deal. Since then, McAfee CEO Dave DeWalt has said the company would work on ways to bake its technology into Intel chipsets. At an investor conference in March, DeWalt said the goal would be to find ways to gain visibility into devices that have a tiny footprint but could be used by attackers to gain access to company networks.
DeWalt told investors the company would work closely with Intel's Wind River subsidiary, a firm Intel acquired in 2009. It makes operating system software for printers, ATM machines, network gateways, satellite systems, mobile devices and other embedded systems. It’s unclear whether DeepSAFE uses Wind River technology, which is designed to run in a tiny footprint and can interface to hardware-based crypto functions.

PCI Council issues point-to-point encryption validation requirements

The PCI Security Standards Council issued point-to-point encryption validation requirements as part of a new program that aims to provide merchants with a list of certified products.
The PCI encryption requirements document, PCI Point-to-Point Encryption Solution Requirements, was released this week and provides vendors, assessors and merchants with guidelines for hardware-based point-to-point encryption implementations that support PCI DSS compliance. The Council said its requirements focus on ways to secure and monitor the hardware, develop and maintain secure applications, and use secure key management methodologies.
Point-to-point or end-to-end encryption providers have been touting the benefits of encrypting cardholder data from the time a credit card is swiped at a point-of-sale device to the time it reaches a card processor. But merchants have had no easy way of evaluating individual providers to determine whether the equipment, applications and capabilities meet PCI DSS requirements from the time credit card data is captured to its transmission to a processor and bank systems.  The problem has resulted in some high-profile data security breaches that highlighted some holes in PCI assessments and so-called end-to-end encryption implementations.
Last year the Council called point-to-point encryption implementations too immature to properly evaluate. Bob Russo, general manager of the PCI SSC, said that many merchants have purchased and deployed hardware-based point-to-point encryption systems, prompting the PCI Council to create the validation program. Testing procedures will be released later this year followed by a new training program for qualified security assessors, Russo said.  A certified list of systems will be produced in the spring of 2012.
“Merchants think that buying point-to-point encryption solutions will reduce the scope of what they’re doing and that’s not always the case,” Russo said. “We know people are buying this right now so we wanted to make sure we produced something meaningful as well as a program that certifies some of these things.”
The first phase of the point-to-point encryption program is to focus on requirements for implementations that combine hardware-based encryption PIN transaction security (PTS) devices, where the card is swiped, with hardware security modules, where the decryption takes place. In the second phase, validation requirements will address hybrid systems and pure software point-to-point encryption deployments, Russo said.
The validation document lays out six areas that will be assessed in a point-to-point encryption implementation. The Council will oversee evaluation of the security controls used on the hardware, the applications within the hardware, the environment where encryption hardware is present, the transmissions between the encryption and decryption environments, the decryption environment itself and the key management operations.
The document lays out the responsibilities of device manufacturers, application vendors and point-to-point encryption vendors. It combines validation programs run under the Payment Application Data Security Standards (PA-DSS) and the PCI PIN Transaction Security laboratory, which currently tests point of interaction devices.
A Qualified Security Assessor will evaluate the complete deployment to ensure the hardware, applications and key management processes fully protect card holder data by meeting the PCI DSS requirements, according to the document.
A fully validated point-to-point encryption implementation will reduce the scope of PCI DSS on a merchant’s systems, but the PCI Council cautions that merchants would still be required to be evaluated against PCI DSS to ensure the system is being secured and maintained.
“This scope reduction does not entirely remove or replace all of a merchant's PCI DSS compliance or validation obligations,” according to the PCI point-to-point encryption validation document. “Applicable requirements covering the education of staff handling account data, security policies, third-party relationships, and physical security of media will still apply to merchants that have implemented a validated P2PE solution.”

Oracle-owned MySQL.com hacked, serves malware to visitors

MySQL.com was compromised and used to serve malware to visitors running Windows for a short time on Monday. The Oracle-owned site responded quickly to the hack, however, and removed the malware to stop the infections.
Hackers installed JavaScript code on the open-source database site that redirected visitors and attacked their systems with a BlackHole exploit kit. The systems of those visiting the site loaded the malicious JavaScript file quietly and automatically.
Security vendor Armorize Technologies discovered the attack early Monday morning. According to a blog post by Armorize chief executive Wayne Huang, “it exploits the visitor’s browsing platform (the browser, the browser plugins like Adobe Flash, Adobe PDF, etc, Java,…), and upon successful exploitation, permanently installs a piece of malware into the visitor’s machine, without the visitor’s knowledge.”
Armorize added that “the visitor doesn’t need to click or agree to anything; simply visiting MySQL.com with a vulnerable browsing platform will result in an infection.”
Huang said his team had yet to determine the goal of the attack, but noted that attackers typically install malware to create botnet computers that can be rented out or to steal victims’ passwords. He also said he did not know how dangerous the infection would be to the systems hit, and that the malware would still be running even after a reboot of the machine.
The intermediate redirection site was found to be located in Germany, while the final site that actually housed the malware was located in Sweden.
The Armorize blog also included a video showing how the infection spread on visitors’ machines. It noted that only 4 out of 44 antivirus vendors on the VirusTotal site could detect the malware.

Next-gen firewall vs. UTM device: Which is better for Web 2.0 risks?

It seems to me UTMs are basically stateful firewalls with a few additions and that, for Web 2.0 applications, UTM is obsolete. But what would you define as next-generation firewalls, and would you recommend them, in particular, to protect against Web 2.0 threats?
When stateful inspection firewalls first came on the scene in the 1990s, they revolutionized network security by allowing perimeter protection to move beyond the simple packet-by-packet filtering process used up until that point.  Stateful inspection added intelligence and memory to the firewall.  Instead of simply making independent decisions each time it encountered a packet, the firewall was now context-aware, able to make decisions based upon the information it had gathered about a connection.
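As a purely conceptual sketch (not any vendor's implementation), the Python snippet below shows the difference context makes: a packet filter judges each packet in isolation, while a stateful firewall remembers the connections it has already allowed and admits only the return traffic that belongs to them. The addresses and ports are invented for the example.

# Conceptual sketch of stateful inspection: a table of allowed connections
# gives the firewall the "memory" that a per-packet filter lacks.
allowed_outbound = set()   # entries: (client_ip, client_port, server_ip, server_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    # Policy decision made once, when the internal client opens the connection.
    allowed_outbound.add((src_ip, src_port, dst_ip, dst_port))
    return True

def inbound(src_ip, src_port, dst_ip, dst_port):
    # Accept only packets that are replies to a connection we initiated.
    return (dst_ip, dst_port, src_ip, src_port) in allowed_outbound

outbound("10.0.0.5", 44321, "93.184.216.34", 80)        # client opens a web connection
print(inbound("93.184.216.34", 80, "10.0.0.5", 44321))  # True: reply traffic is allowed
print(inbound("203.0.113.9", 80, "10.0.0.5", 44321))    # False: unsolicited packet is dropped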

You’re correct in pointing out that unified threat management (UTM) products are basically stateful inspection firewalls with some additional security functionality.  You’ll find that these products often consolidate firewall, intrusion prevention, content filtering, antivirus and other security functionality into a single box.  While this approach is not often appropriate for a large enterprise, a UTM device can be a very effective product for smaller or midsize enterprises seeking to limit security expenditures.

Next-generation firewalls (NGFW) represent the next major step in the development of firewall technology.  I’d actually consider them an advancement from stateful inspection technology, rather than comparing them to UTM devices.  A next-gen firewall is designed to combine the functionality of a firewall and an IPS, while adding detailed application awareness into the mix.  Like the introduction of stateful inspection, NGFWs bring additional context to the firewall’s decision-making process by providing it with the capability of understanding the details of the Web application traffic passing through it, taking action to block traffic that might exploit Web application vulnerabilities.

UTMs and NGFWs will peacefully coexist in the marketplace for quite some time, because they serve very different markets.  While UTMs are targeted at the midsize enterprise that doesn’t generally host Web applications, NGFWs will find their home in large enterprises supporting Web 2.0 applications.

27 Sept 2011

Google Plus G+

Google+ (or Google Plus) is the hot new thing in social networking. Everyone and their little sister have been clamouring for invites to set up their account in this new playground. Unfortunately, users have to start from scratch without any of the connections, posts or other media cultivated on other social media sites. For example, Facebook is one of the most popular photo sharing websites, so it is likely that users have a lot of photos on it. Unfortunately, it is difficult to quickly export these photos from Facebook to Google Plus, as Facebook does not have any automated tool to do so. Fortunately, enterprising users have created a Google Chrome extension that automatically transfers your Facebook photos to Google Plus.

I am confident there are a number of different methods to transfer your photos; however, as I am an ardent Google Chrome user, in this article I have described how to export all your photos using the dedicated Google Chrome extension.


Installing and using the Chrome Extension

Firstly, head over to the Move Your Photos extension page and install it.
Once the extension is installed, you will notice a small Picasa icon to the right of the Chrome address bar, near the wrench. Click on this icon and you will be presented with a link to login to your Facebook account. This will grant the extension access to the photos on your Facebook account.
After you have logged in, the extension will start fetching all your Facebook photos. If you have any empty albums on Picasa, the extension will also notify you of this and allow you to delete them.
Once the pictures have been fetched, you can select which albums or individual photos to upload to your Picasa account.
Note the guide used to indicate which photos have been uploaded (green border), which photos are in the queue (yellow border), which photos are not to be uploaded (grey border) and which images are altogether unavailable (red border).
Unfortunately, the extension appears to be limited to fetching the photos you have uploaded yourself: your albums, mobile uploads, profile pictures, and wall photos are all included, but photos you are merely tagged in are not.
Once you have decided which photos to upload, select “Upload” from the bottom of the page. You may have to scroll down if you have a lot of photos.
The upload process will take some time depending on the number of photos you have selected.
Once all the photos have been uploaded, they will appear in segregated albums on your Web Picasa account.
The album is set to private by default and only those with a link to the album will be able to view the photos.
You can now choose which of your circles to share the photos with.

Conclusion

This app is useful if all your photos are stored on Facebook but you feel like switching your allegiance to Google+. It would be a lot more useful, however, if it were possible to upload all your photos (including tagged images) from Facebook to Google+. It is likely that there are privacy settings blocking this type of functionality.

Living with Fedora – A Debian/Ubuntu User’s Take on Fedora 15

I’ve been a die-hard Debian fan for about 10 years, and I’ve written several articles on the subject. That said, most of our Linux-savvy readers are Ubuntu users, so that’s been my main desktop OS for as long as I’ve been a MakeTechEasier writer. Ubuntu has always been fine, and generally got the job done without hassle, however this past release (11.04, Natty Narwhal) has been the cause of a rift among many Ubuntu users. This release pushed Unity, their homegrown desktop environment, front and center. Like many others, I’ve never managed to get a feel for Unity. After weighing my options, I decided to jump ship and try out Fedora 15. It’s the first Fedora I’ve tried since Core 1, and things certainly have changed.

Basic Differences

We already spent some time comparing Ubuntu 11.04 and Fedora 15, so I won't dwell on that here. In short, both have decided to move beyond the traditional Gnome 2 desktop and into hardware-accelerated modern setups. Ubuntu created Unity and aimed it squarely at casual computer users, whereas Fedora bet the farm on Gnome 3, a newly redesigned and radically different Gnome desktop.
It’s certainly no secret that this author prefers Gnome 3, and that was a major factor in my decision to try Fedora. It’s among the first major distributions to put their full weight behind this relatively new project.
There are of course many differences between Ubuntu and Fedora, but this review will focus on the desktop user experience.

The Good

As mentioned above, the most noticeable difference between Fedora and Ubuntu, or even between Fedora 15 and earlier versions, is that it now runs the Gnome 3 desktop. This is a near-complete rewrite of the Gnome interface and many of its underlying libraries. It takes advantage of hardware-based 3D acceleration to provide extraordinarily smooth effects when creating, destroying, or moving windows. In fact, it's this author's opinion that Gnome 3 has mastered this aspect better than any other desktop interface on any operating system. There are no visual events at all in Gnome 3 that feel jerky or sudden – absolutely everything is smooth and cozy.
Next up in the list of positive traits: Gnome 3 can be scripted and themed with… wait for it… JavaScript and CSS! This means that thousands of developers can immediately apply these popular web technologies to their desktop, customizing it any way they wish using skills they already possess.

The Bad

It’s new. It’s really new, and that has some consequences. Most notably, it means that Gnome 3 lacks a lot of the features users have come to expect from Gnome 2, such as integrated chat and social features and many system configuration options.
Regarding performance, that’s a little bit tricky. I am uncertain whether the problem is caused by Gnome itself, or perhaps some misbehaving application, but on my desktop (and I’m not the only one, judging by some posts I’ve found online) the system seems to get progressively slower the longer it’s used. It’s not normal to have to reboot a Linux system every day, especially to fix a problem like this, but until I’m able to determine the cause of the problem, I can’t rest the blame solely on Gnome.
One thing I can clearly identify as a software problem is the trouble Fedora has with saving my application preferences. Google Chrome repeatedly insists that it's not the default browser, and Nautilus refuses to accept any changes to its application associations. No matter how many times I tell it to use VLC for video, it defaults back to the built-in player the next time Nautilus is opened. This is true for all file types I have attempted to change.
Regarding workspace management, I’m torn. The initial builds of Gnome Shell that we originally reviewed here used an excellent grid-based layout (similar to what you can do with Gnome 2 and Compiz) that I adored, and that alone was just about enough to make me fall in love with this desktop setup.
Later builds moved it to a linear approach, and eventually landed on an automatic linear approach. Personally I can’t stand it when my PC makes such decisions for me, so my first task was to set about learning how to disable that functionality.
If extensions were available allowing users to choose which workspace management method they prefer, this would instantly become one of Gnome 3's killer features. It is my opinion that no other desktop environment offers matching workspace management capability. Unity is pretty good at that, but I've seen Gnome do better.

Conclusion

If I were to sum up my opinion on Fedora 15 in one sentence, it'd have to be "Rough, but with great potential." Gnome 3 is still a baby, and Fedora took a bold step by pushing it to the forefront, and I applaud them for that. As cozy as it may be, there's still a whole lot of polish left to be done. The front-end is still rough, and the back-end doesn't seem to have caught up with all the changes yet. If Fedora can manage to take the successes in this release (which are many) and smooth out some of those rough spots (which are also many), then Fedora 16 is likely to pull a lot of users away from Ubuntu permanently. From the looks of it, I'll be one of them.

A New Generation In Cloud Computing

Google today released information about its newest product. To be released later this year, Google Wave has the potential to truly revolutionize cloud computing. In the keynote from today's I/O conference, some of the Google team gave a demo of what might be the most significant piece of technology released this year.
Unlike normal internet applications like email, photos, and documents, Google Wave works in real time. It is a fully collaborative system. Instead of the standard "threads" or individual emails, conversations are organized as "Waves", which can be thought of as living conversations. It works similarly to email and IM, except that typing can (optionally) appear in real time, cutting down the time it takes to read the other user's information. Also, you can insert a response to an entry at any point, making responding to an earlier message a snap. With email, you would have to go and dig up the message within the thread and edit it all out. With Google Wave, you can add your reply at any point in the conversation.
Wave is multi-user, so you can add anyone to a “Wave”. The neat thing is that you can specify access settings to each individual message in a “Wave.” So if you have 3 of your buddies talking about something and you want to give a brief aside to one of them, you have that option.
Other features include maps, blogs, pictures (including drag and drop from desktop, provided you have Google Gears installed), and a full API, allowing you to write your own apps for it.
Google Wave is open source, making development for it easier.
I will be posting more information on Google Wave as the project becomes more developed.
You can view the Google Wave home page, with the keynote from Google’s I/O conference, here: wave.google.com/. From this page you will be able to link to some of the preview pages.

Cloud Computing to Bring about the Third Wave of IT Reform

When Steve Jobs announced Apple's iCloud service earlier this month, the word "cloud" quickly became a catchphrase here in China. Even though the majority of people are not very clear about the power of the "cloud", industry insiders believe the third wave of reform in the information technology industry is approaching. Our reporter Zeng Liang filed this report, read by Chen Zhe.





Most people don't know much about the technology behind cloud computing, but they actually use it all the time.
When you upload a photo or video to a website, post an article on a blog or email, you're using cloud computing. The platform that you use for your photo, video, blog and email are all part of the "Cloud". And the cloud can offer more.
Professor Yangyang is with the Cloud Computing Expert Association of Chinese Institutes of Electronics. He explains how cloud computing works.

"It's like a huge super computer which owns the capacity of millions of computers and servers and can be accessed through the internet using cell phones, notebook etc."
Some say Cloud computing is like using public utilities such as water.
When you need to use it, just open the tap, and you get as much as you want.
You don't have to pay for the water that you don't use.
With cloud computing your data is held in huge data centers spread around the globe.
US IT giant Apple's recent release of its iCloud service has made the media zero in on cloud computing.
In China, cloud computing is also exploding onto the scene. The past two years have seen a number of companies entering the sector.
Recently, the first commercial data module, named "Cloud Container," has been launched in the Yizhuang area of Beijing by the Cloud Frame Scientific & Technical Corporation.
The Cloud Container is the size of a container for leftovers.
The CEO of the company that produces it, Xu Hongzhong says it can accommodate as much data as four national libraries and millions of hi-def movies.

"Our new cloud box is much easier to install. It takes only 4 weeks to get the data centre up and running, which was unimaginable before. This efficiency is important for cloud computing."

They have already received orders from the government and several industrial companies.
As an emerging strategic industrial zone, Yizhuang in southeastern Beijing has prioritized the cloud-computing industry.
Currently, more than ten enterprises in the cloud-computing industry have been set up in the Yizhuang "cloud valley", facilitated by the municipal government of Beijing.

Professor Yangyang says this shows the government is aware of the importance of developing the industry.
"The government has realized that cloud computing will bring about reform in the IT sector, as well as opportunities. Cloud computing will make it easier to share information, and help save money. It makes up for the weak links in China's IT sector".
He believes the wide application of cloud computing in China will be coming soon.
However, as the sector grows rapidly, many people also worry about possible information leaks.
But Professor Yangyang says cloud computing actually makes information safer.
"I think we are not short of the very technology to protect the information from disclosure. The unified management of cloud computing will help us achieve the goal. The problem is how we define privacy. We need unified regulations and laws to protect the information on the cloud."
The potential risk doesn't seem to be a problem for the ballooning market.
Zhou Zhenggang is a research manager at International Data Corporation (IDC), a market research company.
He says cloud computing in China is developing at a much faster pace than the rest of the world.
"China's cloud computing market has grown by 58% year on year, doubling that of the global market. The public cloud industry has generated as much as 600 million US dollars from 2010 to 2011".
Zhou Zhenggang says while China is keeping up with the development of cloud computing in other developed countries, the only problem is the Internet.
"I think the technology gap in cloud computing is not that wide. However, the internet infrastructure in China is not as established as in developed countries, like the bandwidth. So we have yet to see wider applications."
He says in the future, more and more small and medium sized companies will turn to public cloud services.
Large state-owned companies which have already set up their own data centers will gradually build their private clouds.

Cmd-line Tips and Tricks

Open CMD anywhere:
Go to the folder in question in Windows Explorer, then press Shift + Right-Click. You will notice that the context menu now includes the option "Open command window here."

Open an Elevated Command Prompt:
Click on Start and, in the search bar, type "cmd". Press Ctrl + Shift + Enter. Click on "Yes" when User Account Control pops up. You will notice that you are now in C:\WINDOWS\system32.

Drag and Drop to Command Prompt:
From Windows Explorer, you can drag and drop files into an open Command Prompt window. That will insert the full pathname of the file in question at the prompt. Plus, if you press Enter, you can execute the file.

Copy and paste from the command line:
Right-Click and select Mark. Now, drag over the area you want to copy, hit Enter and the text is copied to the clipboard. Similarly, you can click on the icon in the title bar and choose Paste to paste the text you already have on the clipboard. 

Hit F7 for command line history:
Hit F7 and you will get a complete list of commands that you executed. Use the arrow keys to highlight the command you want to run again or just hit the number key corresponding to the command that you want to execute.

Run multiple commands:
You can run multiple command by separating them with &&. Note that this doesn’t run the commands simultaneously. Instead, the command towards the left is run first and if it completes successfully then the second command will run. If the first command fails, then the second command will not run.

Ex: MKDIR C:\FOLDER && RD C:\FOLDER

Go fullscreen:
Hit Alt+Enter and now you have the entire screen to enter your commands and view the output. Sadly, this doesn't always work.

Navigate the HDD:
To go somewhere on the HDD, type CD C:\ followed by the path you want to reach. From inside a directory, you can also type CD followed by the name of a subdirectory to move into it.

Windows 7 Cmd Line: Common Useful Commands

ASSOC: Displays or modifies file extension associations.
ATTRIB: Displays or changes file attributes.
BREAK: Sets or clears extended CTRL+C checking.
BCDEDIT: Sets properties in boot database to control boot loading.
CACLS: Displays or modifies access control lists (ACLs) of files.
CALL: Calls one batch program from another.
CD: Displays the name of or changes the current directory.
CHCP: Displays or sets the active code page number.
CHDIR: Displays the name of or changes the current directory.
CHKDSK: Checks a disk and displays a status report.
CHKNTFS: Displays or modifies the checking of disk at boot time.
CLS: Clears the screen.
CMD: Starts a new instance of the Windows command interpreter.
COLOR: Sets the default console foreground and background colors.
COMP: Compares the contents of two files or sets of files.
COMPACT: Displays or alters the compression of files on NTFS partitions.
CONVERT: Converts FAT volumes to NTFS. You cannot convert the current drive.
COPY: Copies one or more files to another location.
DATE: Displays or sets the date.
DEL: Deletes one or more files.
DIR: Displays a list of files and subdirectories in a directory.
DISKCOMP: Compares the contents of two floppy disks.
DISKCOPY: Copies the contents of one floppy disk to another.
DISKPART: Displays or configures Disk Partition properties.
DOSKEY: Edits command lines, recalls Windows commands, and creates macros.
DRIVERQUERY: Displays current device driver status and properties.
ECHO: Displays messages, or turns command echoing on or off.
ENDLOCAL: Ends localization of environment changes in a batch file.
ERASE: Deletes one or more files.
EXIT: Quits the CMD.EXE program (command interpreter).
FC: Compares two files or sets of files, and displays the differences between them.
FIND: Searches for a text string in a file or files.
FINDSTR: Searches for strings in files.
FOR: Runs a specified command for each file in a set of files.
FORMAT: Formats a disk for use with Windows.
FSUTIL: Displays or configures the file system properties.
FTYPE: Displays or modifies file types used in file extension associations.
GOTO: Directs the Windows command interpreter to a labeled line in a batch program.
GPRESULT: Displays Group Policy information for machine or user.
GRAFTABL: Enables Windows to display an extended character set in graphics mode.
HELP: Provides Help information for Windows commands.
ICACLS: Display, modify, backup, or restore ACLs for files and directories.
IF: Performs conditional processing in batch programs.
LABEL: Creates, changes, or deletes the volume label of a disk.
MD: Creates a directory.
MKDIR: Creates a directory.
MKLINK: Creates Symbolic Links and Hard Links
MODE: Configures a system device.
MORE: Displays output one screen at a time.
MOVE: Moves one or more files from one directory to another directory.
OPENFILES: Displays files opened by remote users for a file share.
PATH: Displays or sets a search path for executable files.
PAUSE: Suspends processing of a batch file and displays a message.
POPD: Restores the previous value of the current directory saved by PUSHD.
PRINT: Prints a text file.
PROMPT: Changes the Windows command prompt.
PUSHD: Saves the current directory then changes it.
RD: Removes a directory.
RECOVER: Recovers readable information from a bad or defective disk.
REM: Records comments (remarks) in batch files or CONFIG.SYS.
REN: Renames a file or files.
RENAME: Renames a file or files.
REPLACE: Replaces files.
RMDIR: Removes a directory.
ROBOCOPY: Advanced utility to copy files and directory trees
SET: Displays, sets, or removes Windows environment variables.
SETLOCAL: Begins localization of environment changes in a batch file.
SC: Displays or configures services (background processes).
SCHTASKS: Schedules commands and programs to run on a computer.
SHIFT: Shifts the position of replaceable parameters in batch files.
SHUTDOWN: Allows proper local or remote shutdown of machine.
SORT: Sorts input.
START: Starts a separate window to run a specified program or command.
SUBST: Associates a path with a drive letter.
SYSTEMINFO: Displays machine specific properties and configuration.
TASKLIST: Displays all currently running tasks including services.
TASKKILL: Kill or stop a running process or application.
TIME: Displays or sets the system time.
TITLE: Sets the window title for a CMD.EXE session.
TREE: Graphically displays the directory structure of a drive or path.
TYPE: Displays the contents of a text file.
VER: Displays the Windows version.
VERIFY: Tells Windows whether to verify that your files are written correctly to a disk.
VOL: Displays a disk volume label and serial number.
XCOPY: Copies files and directory trees.
WMIC: Displays WMI information inside interactive command shell.