Wednesday 28 September 2011

Top 10 Server Technology Trends for the New Decade

Mobility and agility are the two key concepts for the new decade of computing innovation. At the epicenter of this newly enabled computing trend is cloud computing. Virtualization and its highly scaled big brother, cloud computing, will change our technology-centered lives forever. These technologies will enable us to do more: more communicating, more learning, more global business and more computing with less — less money, less hardware, less data loss and less hassle. During this decade, everything you do in the way of technology will move to the data center, whether it's an on-premises data center or a remote cloud architecture data center thousands of miles away.

1. Mobile Computing
As more workers report to their virtual offices from remote locations, computer manufacturers must supply this new breed of on-the-go worker with sturdier products loaded with the ability to connect to, and use, any available type of Internet connectivity. Mobile users look for lightweight, durable, easy-to-use devices that "just work," with no lengthy or complex configuration and setup. This agility will come from these smart devices' ability to pull data from cloud-based applications. Your applications, your data and even your computing environment (formerly known as the operating system) will live comfortably in the cloud to allow for maximum mobility.

2. Virtualization
By the end of this decade, virtualization technology will touch every data center in the world. Companies of all sizes will either convert their physical infrastructures to virtual hosts and guests or they'll move to an entirely hosted virtual infrastructure. As more business owners attempt to extend their technology refresh cycle, virtualization's seductive money-saving promise brings new hope to stressed budgets as we collectively pull out of the recession. The global move to virtualization will also put pressure on computer manufacturers to deliver greener hardware for less green.

3. Cloud Computing
Cloud computing, closely tied to virtualization and mobile computing, is a technology that some industry observers dismiss as "marketing hype" or old technology repackaged for contemporary consumption. Beyond the hype and relabeling, savvy technology companies will leverage cloud computing to present their products and services to a global audience at a fraction of the cost of current offerings. Cloud computing also protects online ventures with an "always on" philosophy that aims to ensure their services never suffer an outage. Entire business infrastructures will migrate to the cloud during this new decade, making every company a globally accessible one.

4. Web-based Applications
Heavy, locally installed applications will cease to exist by the end of the decade. This move will occur ahead of the move to virtual desktops. The future of client/server computing is server-based applications delivered to thin clients. Everything, including the client software, will remain on a remote server. Your client device (e.g., cell phone, computer, ebook reader) will call applications to itself much like the X Terminals of yesteryear.

5. Libraries
By the end of this decade, printed material will all but disappear in favor of its digital counterpart. Digitization of printed material will be the swan song for libraries, as all but the most valuable printed manuscripts will head to the world's recycling bins. Libraries, as we know them, will cease operation and likely reopen as book museums where schoolchildren will see how we used physical books back in the old days.

6. Open Source Migration
Why suffer under the weight of license fees when you can reclaim those lost dollars with a move to open source software? Companies that can't afford to throw away money on licensing fees will move to open source software including Linux, Apache, Tomcat, PostgreSQL and MariaDB. This decade will prove that the open source model works, and the proprietary software model does not.

7. Virtual Desktops
Virtual Desktop Infrastructure (VDI) has everyone's attention these days and will continue to hold it for the next few years as businesses move away from local desktop operating systems to virtual ones housed in data centers. This concept ties into mobile computing, virtualization and cloud computing. Desktops will likely reside in all three locations (PC, data center, cloud) for a few more years, but the transition will approach 100 percent non-local by the end of the decade. Moving away from localized desktop computing will lower maintenance bills and alleviate much of the user error associated with desktop operating systems.

8. Internet Everywhere
You've heard of the Internet, haven't you? Do you remember when it was known as The Information Superhighway and all of the discussions and predictions about how it would change our lives forever? The future is here and the predictions came true. The next step in the evolution of the Internet is to have it available everywhere: supermarket, service station, restaurant, bar, mall and automobile. Internet access will exist everywhere by the end of this new decade. Every piece of electronic gadgetry (yes, even your toaster) will have some sort of Internet connectivity due in part to the move to IPv6.
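
To give a rough sense of why IPv6 matters for connecting every gadget, the short Python sketch below compares the IPv4 and IPv6 address spaces. The address counts are exact; the world-population figure is an assumption used only for scale.

# Rough illustration: why IPv6 makes "an address for every gadget" plausible.
# The ~7 billion world-population figure (circa 2011) is an assumption used
# purely for scale.

ipv4_addresses = 2 ** 32          # about 4.3 billion
ipv6_addresses = 2 ** 128         # about 3.4 x 10^38

world_population = 7_000_000_000  # assumed 2011 figure, for scale only

print(f"IPv4 addresses: {ipv4_addresses:,}")
print(f"IPv6 addresses: {ipv6_addresses:.3e}")
print(f"IPv6 addresses per person: {ipv6_addresses / world_population:.3e}")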

9. Online Storage
Currently, online storage is still a geek thing with limited appeal. So many of us have portable USB hard drives, flash drives and DVD burners that online storage is more of a luxury than a necessity. However, the approaching mobile computing tsunami will require you to have access to your data on any device with which you're working. Even the most portable storage device will prove unwieldy for the user who needs her data without fumbling with an external hard drive and USB cable. Much like cell phones and monthly minutes plans, new devices will come bundled with an allotment of online storage space.

10. Telephony
As dependence on cell phones increases, manufacturers will create new phones that will make the iPod look like a stone tool. They won't resemble current phones in appearance or function. You'll have one device that replaces your phone, your computer, your GPS and your ebook reader. This is yet another paradigm shift brought about by the magic of cloud computing. Telephony, as we know it, will fall away into the cloud as Communication as a Service (CaaS). Moving communications to the data center with services such as Skype and other VoIP offerings is already a reality, and large-scale migrations will soon follow.

Monday 26 September 2011

Asian tech firms face stiff challenge in mobile systems

SHANGHAI (Reuters) - Mobile operating systems developed by Asia's top technology firms will at best chip away at the market share of the dominant leaders, Google's Android and Apple's iOS.

And such a scenario would only be possible if these companies, such as China's Baidu Inc and Alibaba Group, are able to court enough application developers, analysts said.

"The challenge for Baidu, Alibaba, other firms looking to deploy their own mobile operating systems is how to develop and market one with sufficient appeal to users, app developers, device makers, and other parties," said Mark Natkin, Beijing-based technology consultant with Marbridge Consulting.

Baidu and Alibaba Group have launched mobile operating platforms named Baidu Yi and Aliyun respectively to capture a slice of the growing mobile Internet market.

Baidu is working with Dell to produce smartphones based on Yi, while Alibaba already has phones on sale running Aliyun.

China is home to more than 900 million mobile phone subscribers, the world's largest mobile phone market, but only about 10 percent are 3G users, highlighting the growth potential.

Media reports have said Taiwanese handset maker HTC Corp has expressed interest in buying a mobile OS, while Samsung Electronics, which is heavily focused on Android software, is expanding features available for smartphones running on its own operating system, Bada.

Android users have more than 100,000 Android applications to choose from, while Apple's App Store has more than 425,000 applications. In comparison, Samsung's Bada has access to 13,000 applications.

It is still too early to determine the attractiveness of the platforms based solely on the number of applications available, but technology companies going into the space are keenly aware of the importance of attracting developers to their platform.

Earlier this month, Baidu's Chief Financial Officer told Reuters in an interview that her company was looking at acquisitions in the cloud computing space to support its push into the mobile arena.

"We will continue to improve our technology and if there are teams or are technologies that help us, that will naturally be our target," said Jennifer Li.

"Ultimately, we want to build a platform that is easily accessible - where Baidu services and where good applications can be available," Li said.

Alibaba will hold a conference this year to show developers how to create applications for its phone and discuss industry practices, a company spokeswoman said. Its OS, the Aliyun, currently has about 30 applications but its platform is compatible with Android applications.

LIMITED IMPACT

Many analysts are skeptical that the new mobile operating systems will be able to overtake Android or Apple even in the long term.

Tuesday 20 September 2011

Chatty robots go viral on YouTube

"It was just an afternoon hack," said Hod Lipson, associate professor of mechanical and aerospace engineering. "It went viral in 24 hours and took us completely by surprise."

Lipson asked Ph.D. students Igor Labutov and Jason Yosinski to set up the conversation as a demo for his class on artificial intelligence. They chose a Web-based chatbot (a computer program designed to simulate human conversation) called Cleverbot. Anyone can go to http://cleverbot.com and carry on a typed conversation with the robot. The students added text-to-speech capability and computer-generated faces, set two laptops side-by-side on a table and connected them to Cleverbot, seeding the conversation with a simple "Hi." They videotaped part of the conversation and posted it on YouTube. As of Aug. 31, it had more than a million hits.

The two avatars, one male and one female, start by exchanging pleasantries, but argue when one accuses the other of being a robot and then launch into a discussion of religion. Many chatbots work by repeating back what they hear in a slightly different form, but Cleverbot, developed by British artificial intelligence specialist Rollo Carpenter, draws on a vast database of phrases from all the conversations it has had in the past. Since it went live in 1997, Cleverbot has carried on more than 20 million conversations. That may explain why the male avatar says at one point "I am a unicorn." Apparently some human once said that.


"What makes this interesting is how people interpret what they see," said Lipson. Some, for instance, find "sexual tension" between the two characters. One viewer said the conversation was not like real human speech, but another countered that it was just like marriage. "The reaction is the real story," Lipson said.


Although this is not a typical subject for research in Lipson's Creative Machines Laboratory, the team is considering further exploration. Possibilities include conversations between three or more robots, or multiple robots and humans. Since Cleverbot learns from the conversations it has, what would happen if two robots continued to converse over a long period of time?


"Lying is possible," Yosinski noted.

"It's a blurring of the lines between humans and machines," Lipson said.

3M and IBM to develop new types of adhesives to create 3D semiconductors

3M and IBM announced that the two companies plan to jointly develop the first adhesives that can be used to package semiconductors into densely stacked silicon “towers.” The companies are aiming to create a new class of materials, which will make it possible to build, for the first time, commercial microprocessors composed of layers of up to 100 separate chips.

Such stacking would allow for dramatically higher levels of integration for information technology and consumer electronics applications. Processors could be tightly packed with memory and networking, for example, into a “brick” of silicon that would create a computer chip 1,000 times faster than today’s fastest microprocessor, enabling more powerful smartphones, tablets, computers and gaming devices.


The companies’ work can potentially leapfrog current attempts at stacking chips vertically – known as 3D packaging. The joint research tackles some of the thorniest technical issues underlying the industry’s move to true 3D chip forms. For example, new types of adhesives are needed that can efficiently conduct heat through a densely packed stack of chips and away from heat-sensitive components such as logic circuits.

“Today's chips, including those containing ‘3D’ transistors, are in fact 2D chips that are still very flat structures,” said Bernard Meyerson, VP of Research, IBM. “Our scientists are aiming to develop materials that will allow us to package tremendous amounts of computing power into a new form factor – a silicon ‘skyscraper.’ We believe we can advance the state-of-art in packaging, and create a new class of semiconductors that offer more speed and capabilities while they keep power usage low -- key requirements for many manufacturers, especially for makers of tablets and smartphones.”

Many types of semiconductors, including those for servers and games, today require packaging and bonding techniques that can only be applied to individual chips. 3M and IBM plan to develop adhesives that can be applied to silicon wafers, coating hundreds or even thousands of chips at a single time. Current processes are akin to frosting a cake slice-by-slice.


Under the agreement, IBM will draw on its expertise in creating unique semiconductor packaging processes, and 3M will provide its expertise in developing and manufacturing adhesive materials.

“Capitalizing on our joint know-how and industry experience, 3M looks forward to working alongside IBM – a leader in developing pioneering packaging for next-generation semiconductors,” said Herve Gindre, division vice president at 3M Electronics Markets Materials Division. “3M has worked with IBM for many years and this brings our relationship to a new level. We are very excited to be an integral part of the movement to build such revolutionary 3D packaging.”

Fujitsu develops compact silicon photonics light source for high-bandwidth CPU interconnects

Fujitsu Laboratories announced the development of a compact silicon photonics light source for use in the optical transceivers required for optical interconnects that carry large volumes of data at high speed between CPUs. Previously, when the light source built into an optical transceiver and the optical modulator that encodes data onto its light experienced thermal fluctuations, a mismatch could arise between the lasing wavelength of the light source and the operating wavelength of the modulator, with the risk that the light would no longer carry information. Thermal control has therefore been indispensable for keeping the two wavelengths consistently matched. By introducing a mechanism that automatically keeps the light source's wavelength and the modulator's operating wavelength in sync, Fujitsu Laboratories has obviated the need for a thermal control mechanism, allowing the device to be smaller and more energy efficient.

This technology enables compact, low-power optical transceivers to be mounted directly in CPU packaging. Through its application to optical interconnects between CPUs for exaflops-class supercomputers and high-end servers, the technology paves the way for super-high-speed computers.

Details of this technology were presented at the 8th Group IV Photonics international conference (GFP 2011), running September 14-16 in London.



In recent years, the performance of supercomputers has been roughly doubling every 18 months. Right now, work is underway to produce exaflops-class supercomputers with a target date around 2020. Realizing these ultrafast computers will require high speed, large-capacity interconnects that allow individual CPUs to transfer data to each other at tens of terabits per second. But the existing electrical interconnects based on copper wire are thought to be approaching their speed limit. This has spurred investigation into optical interconnects between CPUs.
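
As a rough check on that growth rate, the arithmetic behind the 2020 target works out as sketched below. This is a back-of-the-envelope calculation only; the assumption that the fastest machine circa 2011 delivers roughly 10 petaflops is mine, not the article's.

import math

# Back-of-the-envelope check: if peak supercomputer performance doubles roughly
# every 18 months, how long until an exaflops machine?
# The ~10 petaflops starting point (fastest system circa 2011) is an assumption.

start_flops = 10e15            # ~10 petaflops, assumed 2011 baseline
target_flops = 1e18            # 1 exaflops
doubling_period_years = 1.5    # roughly 18 months per doubling

doublings_needed = math.log2(target_flops / start_flops)   # ~6.6 doublings
years_needed = doublings_needed * doubling_period_years    # ~10 years

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years needed: {years_needed:.1f}  (2011 + ~{years_needed:.0f} is close to the 2020 target)")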

Much effort has been devoted to research and development on optical transceivers, which use silicon photonics technology as a step toward implementing large-capacity optical interconnects between CPUs. The use of silicon photonics technology allows for the optical transceivers to be miniaturized and therefore, to be densely integrated near CPUs. Also, because they use silicon semiconductor manufacturing technologies, it should be possible to produce them in large volumes and at low costs.

The transmitter component of an optical transceiver comprises a light source and an optical modulator that encodes data into the light emitted by the light source. A good candidate for the optical modulator is a ring resonator, as it is compact and energy efficient. But because the optical transceiver is located near the CPU, heat from the CPU causes the lasing wavelength and the operating wavelength of the ring-resonator-based modulator to drift apart, preventing information from being encoded in the light. A thermal control mechanism is needed to keep the two wavelengths exactly matched, but such a mechanism is an obstacle to making the transceiver simple, compact and energy efficient.

Fujitsu Laboratories has developed the world's first compact silicon photonics light source that obviates the need for a thermal control mechanism. The light source is composed of a silicon mirror and a semiconductor optical amplifier (see red box, Figure 3). The silicon mirror, in turn, comprises a ring resonator and Bragg reflector (see blue box, Figure 3), controlling the lasing wavelength.



The ring resonators introduced into the light source and the optical modulator are exactly the same size, so shifts in the light source's lasing wavelength and the optical modulator's operating wavelength caused by CPU heat should match. This obviates the need for the thermal control mechanism required by existing technology, enabling the optical transceiver to be made more compact and energy efficient. The transmitter component can be shrunk to less than 1 mm in length. Arraying these in a row should yield an optical transceiver for a large-capacity optical interconnect that is small enough to be mounted on a CPU module.
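
A toy calculation illustrates why identically sized rings stay matched: the thermal shift of a ring resonance depends on the material and the temperature change, so two identical rings heated by the same amount drift together and the laser-modulator mismatch stays near zero. The thermo-optic coefficient, group index and temperature rise below are typical textbook values assumed for illustration, not Fujitsu's device parameters.

# Toy model of thermal wavelength drift in two identical silicon ring resonators.
# All values are typical literature numbers assumed for illustration only.

wavelength_nm = 1550.0       # nominal operating wavelength
dn_dT = 1.86e-4              # silicon thermo-optic coefficient, per kelvin
group_index = 4.2            # assumed group index of the ring waveguide

def resonance_shift_nm(delta_T_kelvin: float) -> float:
    """First-order shift of a ring resonance with temperature:
    d(lambda)/dT ~= lambda * (dn/dT) / n_g."""
    return wavelength_nm * dn_dT * delta_T_kelvin / group_index

delta_T = 20.0  # assumed temperature rise from CPU heat, in kelvin

laser_shift = resonance_shift_nm(delta_T)      # shift of the light source's ring
modulator_shift = resonance_shift_nm(delta_T)  # shift of the modulator's ring (same size)

print(f"Shift of each ring: {laser_shift:.3f} nm")
print(f"Laser/modulator mismatch: {laser_shift - modulator_shift:.3f} nm")  # ~0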

This technology is a stepping stone toward the future of exaflops-class supercomputers and high-end servers that use large-capacity optical interconnects on a large scale and with low energy requirements.

The First Plastic Computer Processor

Two recent developments—a plastic processor and printed memory—show that computing doesn't have to rely on inflexible silicon.

Silicon may underpin the computers that surround us, but the rigid inflexibility of the semiconductor means it cannot reach everywhere. The first computer processor and memory chips made out of plastic semiconductors suggest that, someday, nowhere will be out of bounds for computer power.

Researchers in Europe used 4,000 plastic, or organic, transistors to create the plastic microprocessor, which measures roughly two centimeters square and is built on top of flexible plastic foil. "Compared to using silicon, this has the advantage of lower price and that it can be flexible," says Jan Genoe at the IMEC nanotechnology center in Leuven, Belgium. Genoe and IMEC colleagues worked with researchers at the TNO research organization and display company Polymer Vision, both in the Netherlands.

The processor can so far run only one simple program of 16 instructions. The commands are hardcoded into a second foil etched with plastic circuits that can be connected to the processor to "load" the program. This allows the processor to calculate a running average of an incoming signal, something that a chip involved in processing the signal from a sensor might do, says Genoe. The chip runs at a speed of six hertz, on the order of a million times slower than a modern desktop machine, and can only process information in eight-bit chunks at most, compared to 128 bits for modern computer processors.
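
For context, the kind of computation the foil chip performs is very simple. A running average of a sensor signal looks like the Python sketch below; the eight-sample window and the example readings are arbitrary choices for illustration, not the chip's actual 16-instruction program.

from collections import deque

def running_average(samples, window=8):
    """Yield the running average of the most recent `window` samples.
    The window size is arbitrary; the plastic chip's real program is
    not published in this article."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

# Example: smoothing a noisy sensor reading (the spike of 50 gets averaged down)
sensor_readings = [10, 12, 9, 14, 11, 10, 13, 12, 50, 11]
print([round(v, 2) for v in running_average(sensor_readings)])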

Organic transistors have already been used in certain LED displays and RFID tags, but have not been combined in such numbers, or used to make a processor of any kind. The microprocessor was presented at the ISSCC conference in San Jose, California, last month.



Making the processor begins with a 25-micrometer thick sheet of flexible plastic, "like what you might wrap your lunch with," says Genoe. A layer of gold electrodes is deposited on top, followed by an insulating layer of plastic, and the plastic semiconductors that make up the processor's 4,000 transistors. Those transistors were made by spinning the plastic foil to spread a drop of organic liquid into a thin, even layer. When the foil is heated gently the liquid converts into solid pentacene, a commonly used organic semiconductor. The pentacene layer was then etched using photolithography to make the final pattern for transistors.

In the future, such processors could be made more cheaply by printing the organic components like ink, says Genoe. "There are research groups working on roll-to-roll or sheet-to-sheet printing," he says, "but there is still some progress needed to make organic transistors at small sizes that aren't wobbly," meaning physically irregular. The best lab-scale printing methods so far can only deliver reliable transistors in the tens of micrometers, he says.

Creating a processor made from plastic transistors was a challenge, because unlike those made from ordered silicon crystals, not every one can be trusted to behave like any other. Plastic transistors each behave slightly differently because they are made up of amorphous collections of pentacene molecules. "You won't have two that are equal," says Genoe. "We had to study and simulate that variability to work out a design with the highest chance of behaving correctly."

The team succeeded, but that doesn't mean the stage is set for plastic processors to displace silicon ones in consumer computers. "Organic materials fundamentally limit the speed of operation," Genoe explains. He expects plastic processors to appear in places where silicon is barred by its cost or physical inflexibility. The lower cost of the organic materials used compared to conventional silicon should make the plastic approach around 10 times cheaper.

"You can imagine an organic gas sensor wrapped around a gas pipe to report on any leaks with a flexible microprocessor to clean up the noisy signal," he says. Plastic electronics could also allow disposable interactive displays to be built into packaging, for example for food, says Genoe. "You might press a button to have it add up the calories in the cookies you ate," he says.

But such applications will require more than just plastic processors, says Wei Zhang, who works on organic electronics at the University of Minnesota. At the same conference where the organic processor was unveiled, Zhang and colleagues presented the first printed organic memory of a type known as DRAM, which works alongside the processor in most computers for short-term data storage. The 24-millimeter-square memory array was made by building up several layers of organic "ink" squirted from a nozzle like an aerosol. It can store 64 bits of information.

Previous printed memory has been nonvolatile, meaning it holds data even when the power is off and isn't suitable for short-term storage involving frequent writing, reading, and rewriting, says Zhang. The Minnesota group was able to print DRAM because it devised a form of printed, organic transistor that uses an ion-rich gel for the insulating material that separates its electrodes.

The ions inside enable the gel layer to store more charge than a conventional, ion-free insulator. That addresses two problems that have limited organic memory development. The gel's charge-storing ability reduces the power needed to operate the transistor and memory built from it; it also enables the levels of charge used to represent 1 and 0 in the memory to be very distinct and to persist for as long as a minute without the need for the memory to be refreshed.

Organic, printed DRAM could be used for short-term storage of image frames in displays that are today made with printed organic LEDs, says Zhang. That would enable more devices to be made using printing methods and eliminate some silicon components, reducing costs.

Finding a way to combine organic microprocessors and memory could cut prices further, although Zhang says the two are not yet ready to connect. "These efforts are new techniques, so we cannot guarantee that they will be built and work together," says Zhang. "But in the future, it would make sense."

Full-duplex technology could double wireless capacity with no new towers

Earlier this year, Stanford University researchers created a full-duplex radio that allowed wireless signals to be sent and received simultaneously, thereby doubling the speed of existing networks. Using the same approach, researchers at Rice University have now developed similar full-duplex technology that would effectively double the throughput on mobile networks without the addition of any extra towers.


Currently, mobile phones use two different frequencies to provide two-way communications - one to send transmissions and another to receive. This is because the strength of the transmission drowns out any incoming signal on the same frequency. While it was long thought impossible to overcome this problem, in 2010 Ashutosh Sabharwal, professor of electrical and computer engineering at Rice, and colleagues Melissa Duarte and Chris Dick, published a paper showing that full-duplex was possible.


The trick lay in canceling out the transmitted signal at the source so an incoming signal on the same frequency could still be heard. Like the Stanford approach, the technology developed by the Rice researchers achieves this by employing an extra antenna at the source.


"We send two signals such that they cancel each other at the receiving antenna - the device ears," Sabharwal said. "The canceling effect is purely local, so the other node can still hear what we're sending."


Many modern wireless communications standards, including 802.11n, 4G, LTE, WiMAX and HSPA+, use multiple antennas at both the transmitter and receiver to improve communications performance. This is called multiple-input and multiple-output, or MIMO, and this provided the researchers with a way to implement their full-duplex technology that is low cost and wouldn't require complex new radio hardware.


"Our solution requires minimal new hardware, both for mobile devices and for networks, which is why we've attracted the attention of just about every wireless company in the world," said Sabharwal. "The bigger change will be developing new wireless standards for full-duplex. I expect people may start seeing this when carriers upgrade to 4.5G or 5G networks in just a few years."


The full-duplex technology is set to be rolled into Rice's "wireless open-access research platform," or WARP. This is a collection of programmable processors, transmitters and other gadgets that make it possible for wireless researchers to test new ideas without building new hardware for each test. Stanford researchers actually used WARP in developing their full-duplex technology, and Sabharwal says that adding full-duplex to WARP will allow other researchers to start innovating on top of Rice's breakthrough.


"There are groups that are already using WARP and our open-source software to compete with us," he said. "This is great because our vision for the WARP project is to enable never-before-possible research and to allow anyone to innovate freely with minimal startup effort."


The Rice University team has already gone one step further by achieving asynchronous full-duplex, which means one node can start receiving a signal when it's in the middle of transmitting. Sabharwal says his team is the first to demonstrate this technology, which would allow mobile carriers to further maximize traffic on their networks.

Researchers turn wastewater into “inexhaustible” source of hydrogen

Currently, the world economy and western society in general run on fossil fuels. We've known for some time that this reliance on finite, polluting resources is unsustainable in the long term. This has led to the search for alternatives, and hydrogen is one of the leading contenders. One of the problems is that hydrogen is an energy carrier rather than an energy source: pure hydrogen doesn't occur naturally, and it takes energy - usually generated by fossil fuels - to manufacture it. Now researchers at Pennsylvania State University have developed a way to produce hydrogen that uses no grid electricity, is carbon neutral and could be used anywhere there is wastewater near seawater.

The researchers' work revolves around microbial electrolysis cells (MECs) - a technology related to microbial fuel cells (MFCs), which produce an electric current from the microbial decomposition of organic compounds. MECs partially reverse this process to generate hydrogen (or methane) from organic material, but they require some electrical input to do so.

Instead of relying on the grid to provide the electricity required for their MECs, Bruce E. Logan, Kappe Professor of Environmental Engineering, and postdoctoral fellow Younggy Kim, turned to reverse-electrodialysis (RED). We've previously looked at efforts to use RED to generate electricity using salt water from the North Sea and fresh water from the Rhine and the Penn State team's work follows the same principle - extracting energy from the ionic differences between salt water and fresh water.

A RED stack consists of alternating positive and negative ion exchange membranes, with each RED contributing additively to the electrical output. Logan says that using RED stacks to generate electricity has been proposed before but, because they are trying to drive an unfavorable reaction, many membrane pairs are required. To split water into hydrogen and oxygen using RED technology requires 1.8 volts, which would require about 25 pairs of membranes, resulting in increased pumping resistance.

But by combining RED technology with exoelectrogenic bacteria - bacteria that consume organic material and produce an electric current - the researchers were able to reduce the number of RED stacks required to five membrane pairs.

Previous work with MECs showed that, by themselves, they could produce about 0.3 volts of electricity, but not the 0.414 volts needed to generate hydrogen in these fuel cells. Adding less than 0.2 volts of outside electricity released the hydrogen. Now, by incorporating 11 membranes - five membrane pairs that produce about 0.5 volts - the cells produce hydrogen.
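
The voltage bookkeeping in that paragraph is worth laying out explicitly. The sketch below simply restates the figures quoted in the article; the values are approximate and used for illustration only.

# Voltage bookkeeping for the MEC + RED combination, using the figures quoted
# in the article (approximate values, for illustration only).

v_needed_for_h2_in_mec = 0.414   # volts needed to evolve hydrogen in the MEC
v_from_bacteria = 0.3            # volts the exoelectrogenic bacteria supply
v_from_red_stack = 0.5           # volts from five RED membrane pairs
v_to_electrolyze_water = 1.8     # volts needed to split water with RED alone

total = v_from_bacteria + v_from_red_stack
print(f"Bacteria + RED supply about {total:.1f} V "
      f"(threshold {v_needed_for_h2_in_mec} V), so hydrogen is produced")
print(f"RED alone would need {v_to_electrolyze_water} V, "
      f"roughly 25 membrane pairs instead of 5")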

"The added voltage that we need is a lot less than the 1.8 volts necessary to hydrolyze water," said Logan. "Biodegradable liquids and cellulose waste are abundant and with no energy in and hydrogen out we can get rid of wastewater and by-products. This could be an inexhaustible source of energy."

While Logan and Kim used platinum as the catalyst on the cathode in their initial experiments, subsequent experimentation showed that a non-precious metal catalyst, molybdenum sulfide, had 51 percent energy efficiency.

The Penn State researchers say their results, which are published in the Sept. 19 issue of the Proceedings of the National Academy of Sciences, "show that pure hydrogen gas can efficiently be produced from virtually limitless supplies of seawater and river water and biodegradable organic matter."

Most Powerful Voices in Security

The security community has a growing number of influential and important people, especially as the industry rises to meet the need to address more advanced security threats, such as targeted attacks. But how does a company in the security industry truly identify the influential people? And then once identified, how does one use influential voices to help promote their brand? In this study, we answer the first question - how to identify the most powerful voices in your industry, focusing on the security space - and as part of this we provide a list of people to follow for the best, most up-to-date information, and who have the loudest voices to help carry some of your key messages. In a future study, we will discuss how to further exploit that knowledge to market your brand.

As executives in a fast-changing and social world, many of us struggle with the ability to have our voices heard by our target customers, especially as news in our industry is gaining more attention (e.g. a "hot space"). You would think that if you were a part of an emerging category, that people would pay attention to you. However, getting above the "noise" is a problem for some companies.

Until now we've found ourselves using traditional and often ineffective marketing and sales tools. With firms like Radian6, Eloqua, Marketo and the like, CMOs are being presented with new ways of leveraging social networks to understand, target, and reach their markets.

According to leading researchers, some individuals in your target industry have greater influence than others, holding a virtual megaphone powered by their social graph. The term "social graph," coined a few years ago by Facebook CEO, Mark Zuckerberg, is also referred to as the "open graph," and is used to describe an aggressive initiative to connect the dots between the relationships and associations built on Google+, Facebook, Twitter, Linkedin, Foursquare, other public social networking services, and emerging private enterprise social networks like Salesforce's Chatter, Yammer, and others. Emerging companies like Klout also use the open graph to measure the number of people you reach, how much those people amplify your message, and ultimately the strength of your network.

When you look at established industries like Security, more well-known people, like executives of incumbent security companies, are considered the influencers, while others who are less known exist in niches in the blogosphere or in newly formed circles. Examples of niche groups might include the Cloud Security Alliance, or U.S. congressman Mac Thornberry's Cybersecurity Task Force. You can argue that some people in these niche groups might not even be considered security "experts" or "thought leaders". However, by being associated with an area which is highly visible from a security perspective (e.g. cloud), their voices can still carry significant weight.

Our thesis is that these smaller groups in security can have the most powerful voices. Collectively, however, all these groups include a number of the most vocal, most followed and most re-posted commentators in the security community today. If you are involved in security (as a new startup or an established player), there are a select number of people you need to know.

In compiling our ranking of the Most Powerful Voices ("MPV") in security, we took advantage of concepts similar to Google PageRank for people, working with researchers and thought leaders such as Mark Fidelman (see "The Most Powerful Voices in Open Source").

The metrics needed to measure both broadcast power and profundity were identified through a number of studies performed across several industry categories. Although there have been many advancements in the area of social marketing, the work presented here still requires techniques not yet offered by any single social graph tool available today.

The MPV formula is based on "reach" by examining the number of followers and buzz an individual has on sites like Google and Twitter. We then determine how much impact an individual has with their followers and subscribers. We ask questions like: If you have a Twitter account, how often are you uniquely referenced or retweeted? How much buzz is created around your blog posts, tweets, Quora answers, Linkedin groups, and other messages? How often is an individual referenced in the blogosphere?
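
To make that description concrete, a hypothetical scoring function along these lines might look like the Python sketch below. The metric names, weights and baseline are invented for illustration; the article does not publish the actual MPV formula.

# Hypothetical illustration of a "powerful voice" score combining reach and
# amplification. The field names, weights and baseline are invented for this
# sketch; the real MPV formula is not published in the article.

def mpv_score(followers, retweets_per_post, blog_mentions, average_person_reach=500):
    """Return how many times an individual's broadcast power exceeds an
    assumed 'average active person' baseline."""
    reach = followers                      # raw audience size
    amplification = 1 + retweets_per_post  # how much followers re-broadcast
    buzz = 1 + blog_mentions               # references across the blogosphere
    broadcast_power = reach * amplification * buzz
    return broadcast_power / average_person_reach

# Example with made-up numbers for two hypothetical commentators
print(round(mpv_score(followers=120_000, retweets_per_post=15, blog_mentions=40)))
print(round(mpv_score(followers=3_000, retweets_per_post=1, blog_mentions=2)))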

Top Executive Voices in Security
The MPV formula illustrates how much additional broadcast power an individual has versus an average active person (defined later). For example, Eugene Kaspersky, CEO of Kaspersky Lab, has 5,035 times more broadcast power reach than the average person, while Enrique Salem, CEO of Symantec, has a respectable 855 times more broadcast power than the average person. At the surface, security executives are good targets when searching for powerful voices. However, most, if not all, powerful executives are governed or constrained on what they can say. You won't find CEOs of publicly traded companies providing transparent dialog about their opinions on controversial topics (although Leo Apotheker, CEO at HP, may prove me wrong on one or both of these points). In addition, it's quite difficult to get executives to speak on your particular topic, or about your brand. [Note: We included Ex-CEO from McAfee, David DeWalt, because we assume we'll hear of his next high-profile placement and we can update the company then].

Top Media/Blogger Voices in Security
Then there's the power of active security bloggers like Bruce Schneier ("Schneier on Security") who has a voice which is 8,252 times the average. Yes, that's more than Eugene Kaspersky! Why? Because he's willing to speak his mind on topics where people want transparent and insightful perspective. Also, a dialog can occur between the average person and a blogger. It's easier to reach even the most well known bloggers or editors of news and media properties.

Top Voices in Cloud and Security
We looked at the top 100 voices in cloud computing and searched for those discussing security. Some voices were found to be as high as 5,700 times the average person. As an example, Reuven Cohen, founder and CTO of Enomaly, may not be solely focused on the security industry, but security is the number one issue when it comes to cloud adoption. So why is Reuven's voice stronger than Eugene Kaspersky's? We speculate that this is based on the fact that Reuven is a very ungoverned and vocal voice at an early-stage startup, and that the audience for these voices may assume that startups generally help define the trends and direction of the industry.

Top Government Leaders and Security
We debated whether to include government officials due to their more general public following. Government leaders have a much different audience than those following security executives. However, many government officials are actively involved in security. For example, Susan Collins, who is a ranking member of the Senate Homeland Security and Governmental Affairs Committee, is a co-author of comprehensive cybersecurity legislation, which resulted in much debate in prominent media outlets such as Forbes and the Washington Post.

In addition, as we searched for people who are addressing topics in cyber security, we found people such as U.S. Representative for California's 49th congressional district, Darrell Issa and, of course, the 30th Deputy Secretary of Defense, William Lynn III, who currently maintain voices 31,195 and 25,935 times that of the average person, respectively.

Therefore, we ultimately decided to include government officials because when they communicate they generate a lot of attention.

Chief Information Security Officers
Our survey of over 100 CISOs resulted in the top 10 voices exceeding 1,300 times that of the average person (e.g. See Mandiant CSO, Richard Bejtlich, and Facebook CISO, Joe Sullivan). CISOs or CSOs are prominent figures in the enterprise now. With the rise of advanced persistent threats (APTs), these executives are under growing pressure to lock down their company's intellectual property. In our recent discussions with several Fortune 100 CISOs, some believe there are several APTs lying dormant and undetected in their enterprise today. Look at the recent example of a highly sophisticated and targeted attack on Google's corporate infrastructure originating from China that resulted in the theft of intellectual property back in early 2010.

Therefore, when CISOs transparently talk about their findings (which may not happen often due to security reasons!), people will listen (see Yahoo! CISO, Justin Somaini's, survey on Information Security Function, Governance and Risk Management, Culture and Communication, Metrics and KPI's).

Security Analysts
Lastly, we surveyed over 75 of the top security analysts with the top 10 having voices which ranged from 347 to 710 times the average person. This is no surprise when you see analysts like Gartner's Neil MacDonald openly discussing sensitive topics like what RSA did wrong following the SecurID breach earlier this year.


4G LTE Network Expands with Addition of Daytona Beach

Fastest, Most Advanced Wireless Service Now Covers Most Major Florida Markets

Verizon Wireless has added the Daytona Beach area to its extensive and quickly growing list of markets with the nation's fastest and most advanced 4G LTE (Fourth Generation/Long-Term Evolution) wireless network.

In addition to Daytona Beach, Verizon's 4G LTE network in Florida now includes Miami, Tampa Bay, Fort Lauderdale, West Palm Beach, Orlando, Jacksonville, Tallahassee, Gainesville, Sarasota-Bradenton and Lakeland, plus numerous airports throughout the state.

Nationally, the Verizon Wireless 4G LTE network is already available in 143 cities, covering more than 160 million Americans, or half the U.S. population. Verizon Wireless expects to complete its 4G LTE wireless network across the country and throughout Florida by 2013.

4G LTE will provide area wireless consumers access to services up to 10 times faster than the company's industry-leading 3G network. With this latest progression of wireless technology, the company expects average data rates in real-world, loaded network environments to be 5 to 12 megabits per second (Mbps) on the downlink and 2 to 5 Mbps on the uplink.
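
To put those rates in perspective, the quick calculation below shows what the quoted 5 to 12 Mbps downlink range means for transferring a large file. The 100 MB file size is an arbitrary example, not a Verizon figure.

# Rough transfer-time calculation for the quoted 4G LTE downlink rates.
# The 100 MB file size is an arbitrary example.

file_size_megabytes = 100
file_size_megabits = file_size_megabytes * 8

for rate_mbps in (5, 12):              # quoted real-world downlink range
    seconds = file_size_megabits / rate_mbps
    print(f"{file_size_megabytes} MB at {rate_mbps} Mbps: ~{seconds:.0f} seconds")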

These speeds will allow for smooth mobile video-conferencing, multiple simultaneous video streams, speedy transfer of large files, downloading and running of powerful programs, and numerous other applications to improve efficiency and productivity wirelessly.

The Verizon Wireless 4G LTE technology also provides improved coverage strength inside buildings, as it runs on a powerful 700 megahertz wireless frequency.

"This latest launch in Daytona Beach gives local consumers access to the most advanced technologies in the wireless industry," said Pam Tope, Florida region president of Verizon Wireless. "We're pleased to further build on the head start we have over our competitors in providing these services to customers on the fastest and most advanced 4G LTE network, here in Florida and across the nation."

Devices and Plan Pricing

With many more to come, Florida customers currently can choose from 11 devices to access the blazingly fast speeds of the 4G LTE network, including:


•Smartphones: DROID BIONIC™ by Motorola, Revolution™ by LG, DROID Charge by Samsung and ThunderBolt™ by HTC
•Tablets: Samsung Galaxy Tab™ 10.1 with 4G LTE
•Notebooks and Netbooks: HP® Pavilion dm1-3010nr Entertainment PC and Compaq™ Mini CQ10-688nr with built-in 4G LTE
•Hotspots: Verizon Wireless 4G LTE Mobile Hotspot MiFi™ 4510L and Samsung 4G LTE Mobile Hotspot SCH-LC11
•Modems: Verizon Wireless 4G LTE USB Modem 551L and Pantech UML290 USB modem

Verizon Wireless smartphone customers will need to subscribe to a Verizon Wireless Nationwide Talk plan beginning at $39.99 for 450 minutes per month, and a smartphone data plan starting at $30 monthly access for 2 GB of data. Verizon Wireless customers can choose from the following data plans for their 4G LTE devices.

Extending Coverage and Reliability

Verizon Wireless is also working with rural communications companies to collaboratively build and operate a 4G LTE network in those areas using the tower and backhaul assets of the rural company combined with Verizon's core 4G LTE equipment and premium 700 MHz spectrum. To date, 11 rural carriers have announced their participation in the LTE in Rural America program and have leased spectrum covering more than 2.1 million people in rural communities and nearly 72,000 square miles.

When customers travel outside of a 4G LTE coverage area, the devices automatically connect to Verizon Wireless' 3G network, enabling customers to stay connected from coast to coast. Verizon Wireless' 3G network is the largest, most reliable network in the country and allows customers in 3G coverage areas who purchase 4G LTE devices today to take advantage of 4G LTE speeds when the faster network becomes available in their city.

For more details on the Verizon Wireless 4G LTE network, visit www.verizonwireless.com/lte.

About Verizon Wireless

Verizon Wireless operates the nation's fastest, most advanced 4G network and largest, most reliable 3G network. The company serves 106.3 million total wireless connections, including 89.7 million retail customers. Headquartered in Basking Ridge, N.J., with 83,000 employees nationwide, Verizon Wireless is a joint venture of Verizon Communications (NYSE, NASDAQ: VZ) and Vodafone (LSE, NASDAQ: VOD). For more information, visit www.verizonwireless.com. To preview and request broadcast-quality video footage and high-resolution stills of Verizon Wireless operations, log on to the Verizon Wireless Multimedia Library at www.verizonwireless.com/multimedia.

SOURCE Verizon Wireless



Read more: http://www.sacbee.com/2011/09/19/3922137/4g-lte-network-expands-with-addition.html

How to See Who Views Your Facebook Profile

By now, you've most likely seen the many apps and Web services claiming to let you see who's viewing your Facebook profile. Is your college ex checking up on you? Is someone from work scrolling through pictures of your beach vacation? Are your parents secretly peeking in on your private life? These apps promise the answers.

Unfortunately, they don't deliver. Not a single one of them.

You can be 100 percent certain that each and every app that says "See who views your profile!" or "stalker tracker" or anything else like that is virus-laden junk. These apps would like you to cough up your Facebook password, or they might install the ability to spam your friends via your Facebook wall.

Again: Be extremely wary of any service or app that claims to show you who's been viewing your Facebook profile. This functionality violates Facebook's privacy rules. If you've fallen prey to a purported stalker app (or any other type of Facebook malware), be sure to check out Facebook's instructions for revoking app access to your account.

That said, there are a couple ways to get clues and insights into who's been floating around your profile. While you won't end up with the definitive list you're likely looking for, keep reading for tips and tricks that toe -- but don't cross! -- that fine line between natural curiosity and a massive breach of privacy.

HP ProLiant BL2x220c G7 Server series - Models


Processor
Processor family
Intel® Xeon® 5600 series

Number of processors
2

Processor cores available
6 or 4

Memory
Maximum memory
96 GB

Memory slots
6 DIMM slots

Memory type
PC3-10600 DDR3 RDIMMs and UDIMMs

I/O
Network Controller
(1) 1GbE NC362i 2 Ports
(1) 10GbE NC543i Flex-10/QDR IB 1 Port

Storage
Maximum drive bays
(1) SFF SATA/SSD

Supported drives
Non-hot plug SFF SATA
Non-hot plug SATA SSD

Storage Controller
(1) Integrated SATA

Deployment
Form Factor (fully configured)
32 server nodes per 10U enclosure

Infrastructure management
HP iLO 3 Standard for BladeSystem and HP Insight Foundation

Warranty - year(s) (parts/labor/onsite)
3/0/0

PowerEdge T710 Tower Server

Processor
Intel® Xeon® processor 5500 and 5600 series
Six-core Intel® Xeon®
Quad-core Intel® Xeon®
Operating System
Microsoft® Windows® Small Business Server 2011
Microsoft® Windows® Small Business Server 2008
Microsoft® Windows Server® 2008 SP2, x86/x64 (x64 includes Hyper-V™)
Microsoft® Windows Server® 2008 R2 SP1, x64 (includes Hyper-V™ v2)
Microsoft® Windows® HPC Server 2008
Novell® SUSE® Linux® Enterprise Server
Red Hat® Enterprise Linux®

Virtualization Options:
Citrix® XenServer™
VMware® vSphere™ 4.1 (including VMware ESX® 4.1 or VMware ESXi™ 4.1)
For more information on the specific versions and additions, visit www.dell.com/OSsupport.
Chipset
Intel™ 5520
Memory
Up to 192GB (18 DIMM slots): 1GB/2GB/4GB/8GB/16GB DDR3
Up to 1333MHz
Embedded Hypervisor (Optional)
Microsoft® Hyper-V™ via Microsoft® Windows Server® 2008
VMware® vSphere™ 4.1 (including VMware ESX® 4.1 or VMware ESXi™ 4.1)
Storage

Hot-plug Hard Drive Options:
2.5" SAS SSD, SATA SSD, SAS (15K, 10K), nearline SAS (7.2K), SATA (7.2K)
3.5" SAS (15K, 10K), nearline SAS (7.2K), SATA (7.2K)

Maximum Internal Storage:
Up to 24TB

External Storage:
For information about Dell external storage options, visit Dell.com/Storage.
Drive Bays

Hot-Swap options available:
Up to eight 3.5" SAS or SATA drives
Up to sixteen 2.5" SAS, SATA or SSD drives
Slots
6 PCIe G2 slots + 1 storage slot:

One x16 slot
Four x8 slots
One x4 slot
One x8 Storage slot
RAID Controllers

Internal:
PERC H200 (6Gb/s)
PERC H700 (6Gb/s) with 512MB battery-backed cache; 512MB, 1GB Non-Volatile battery-backed cache
SAS 6/iR
PERC 6/i with 256MB battery-backed cache
PERC S100 (software based)
PERC S300 (software based)

External:
PERC H800 (6Gb/s) with 512MB of battery-backed cache; 512MB, 1GB Non-Volatile battery-backed cache
PERC 6/E with 256MB or 512MB of battery-backed cache

External HBAs (non-RAID):

6Gbps SAS HBA
SAS 5/E HBA
LSI2032 PCIe SCSI HBA
Network Controller
2 dual port embedded Broadcom® NetXtreme II™ 5709c Gigabit Ethernet NICs with failover and load balancing.
Optional 1GbE and 10GbE add-in NICs
Communications

Optional add-in NICs:
Dual Port 10GB Enhanced Intel Ethernet Server Adapter X520-DA2 (FCoE Ready for Future Enablement)
Intel PRO/1000 PT Dual Port Server Adapter, Gigabit, Copper, PCI-E x4
Intel PRO/1000 VT Quad Port Server Adapter, Gigabit, Copper, PCI-E x8
Intel 10GBase-T Copper Single Port NIC, PCI-E x8
Intel Single Port Server Adapter, 10Gigabit, SR Optical, PCI-E x8
Intel OPLIN 10G SFP+ copper dual port NIC PCI-E x8
Intel® Ethernet X520 DA2 Dual-Port 10 Gigabit Server Adapter
Broadcom® BMC57710 10Base-T Copper Single Port NIC, PCI-E x8
Broadcom® BMC5709C IPV6 Gigabit Copper Dual Port NIC with TOE and iSCSI Offload, PCI-E x4
Broadcom® BMC5709C IPV6 Gigabit Copper Dual Port NIC with TOE, PCI-E x4
Broadcom® NetXtreme® II 57711 Dual Port Direct Attach 10Gb Ethernet PCI-Express Network Interface Card with TOE and iSCSI Offload
Intel® Gigabit ET Dual Port Server Adapter
Intel® Gigabit ET Quad Port Server Adapter
Brocade CNA Dual-port adapter
Emulex® CNA iSCSI HBA stand up adapter OCE10102-IX-D
Emulex® CNA iSCSI HBA stand up adapter OCE10102-FX-D

Optional add in HBAs:
Qlogic® QLE 2462 FC4 Dual Port 4 Gbps Fiber Channel HBA
Qlogic® QLE 220 FC4 Single Port 4 Gbps Fiber Channel HBA
Qlogic® QLE 2460 FC4 Single Port 4 Gbps Fiber Channel HBA
Qlogic® QLE2562 FC8 Dual-channel HBA, PCI-E Gen 2 x4
Qlogic® QLE2560 FC8 Single-channel HBA, PCI-E Gen 2 x4
Emulex® LPe-1150 FC4 Single Port 4 Gbps Fiber Channel HBA, PCI-E x4
Emulex® LPe-11002 FC4 Dual Port 4 Gbps Fiber Channel HBA, PCI-E x4
Emulex® LPe-12000, FC8 Single Port 8 Gbps Fiber Channel HBA, PCI-E Gen 2 x4
Emulex® LPe-12002, FC8 Dual Port 8 Gbps Fiber Channel HBA, PCI-E Gen 2 x4
Brocade FC4 and 8 GB HBAs
Power
2 Hot-plug redundant PSUs. 750 Watts, 1100 Watts. Voltage range 100 - 240VAC, 50/60Hz

UPS (Uninterruptible Power supplies):

500W – 2700W
Extended Battery Module (EBM)
Network Management Card
Availability
ECC DDR3 memory; hot-plug hard drives; optional hot-plug redundant power supplies; dual embedded NICs with failover and load-balancing support (4 total ports); optional PERC6/i integrated daughter card controller with battery-backed cache; hot-plug redundant cooling; tool-less chassis; fibre and SAS cluster support; validated for Dell/EMC SAN
Video Card
Integrated Matrox G200
Chassis
T710 Tower or 5U rack-mountable Server
Height: 46.63cm (18.4")
Length: 73.18cm (28.9") (overall including bezel)
Width: 21.79cm (8.6")
Weight (maximum config): 35.3kg (78.0lb)

Rack Support
ReadyRails™ sliding rails for 4-post Racks:

Support tool-less installation in 19” EIA-310-E compliant square or unthreaded round hole 4-post racks including all Dell 42xx & 24xx racks
NOTE: Threaded 4-post racks and 2-post racks require 3rd party conversion kits or fixed shelves available through Dell Software & Peripherals

Support full extension of the system out of the rack to allow serviceability of key internal components
Support optional cable management arm (CMA)
Rail depth without the CMA: 760 mm
Rail depth with the CMA: 840 mm
Square-hole rack adjustment range: 692-756 mm
Round-hole rack adjustment range: 678-749 mm
1U/2U Rail Attachment Brackets
Management
Dell™ OpenManage™
iDRAC6, Optional iDRAC6 Enterprise

BladeCenter S Chassis

Highlights
Integrates servers, SAN storage, networking, I/O and applications into a single chassis
Uses standard office power plugs with 100 – 240 V, so you do not need a data center to take command of your data
Features the BladeCenter Start Now Advisor, making it easy to set up servers, SAN storage, network switches and SAN switches, all from a single console
Flexible modular technology integrates Intel®, AMD Opteron™ or POWER™ processor-based blade servers supporting a wide range of operating systems
Comes with management tools that are open and easily integrated, allowing you to focus on your business, not your IT
Helps build greener IT infrastructures with powerful IBM Cool Blue™ technology, and a portfolio of products and tools to help customers plan, manage and control power and cooling.
BladeCenter S offers a broad range of storage and networking options integrated into the chassis to simplify infrastructure complexity and manageability while helping lower total cost of ownership.

The BladeCenter S Office Enablement Kit is the ideal way to deploy BladeCenter S in your everyday office. The kit enables several office-friendly features, including a built-in Acoustical Module, a front locking door, 33 percent of the chassis (4U) left available for expansion or other IT equipment, and an optional contaminant filter.

Supermicro 6-Core MP Series


The flagship 6-Core quad-processor SuperServer 8000 Series boasts the highest performance per watt in the industry. Featuring the Intel® Xeon® Processor 7400 Series and supporting up to 192GB of fully buffered DDR2 667 or 533 MHz memory via 24 DIMM slots, the SuperServer 8000 Series is designed to specialize in virtualization with an expanded memory capacity capable of boosting performance for a wide range of applications.


1U

Key Features

1. Quad Intel® 64-bit Xeon® MP, 1066 MHz FSB support
2. Intel® 7300 (Clarksboro) Chipset
3. Up to 192GB DDR2 ECC FB-DIMM (Fully Buffered DIMM)
4. Intel® 82575EB Dual-port Gigabit Ethernet Controller
5. 6x SATA (3 Gbps) Ports via ESB2 Controller
6. 1 (x8) PCI-e (using x16 slot), 1 (x8) PCI-e (using x8 slot) & 1 (x4) PCI-e (using x8 slot); 1x 64-bit 133MHz PCI-X
7. ATI ES1000 Graphics with 32MB video memory
8. IPMI 2.0 (SIMSO) Slot

IBM's New Generation of Intel(R)-based Servers, Software Cuts Through Data Center Chaos



Less Expensive to Run and Simpler to Manage, IBM Launches New Generation of x86 System x Racks, Blades, iDataPlex Technology, and Management Software to Create a Dynamic Infrastructure

ARMONK, NY - 30 Mar 2009: IBM (NYSE:IBM) today unveiled a new generation of Intel Xeon processor 5500 series-based System x(TM) servers and software that enable customers to more easily roll out virtualized computing and significantly reduce growing operating costs with higher performance, simplified management and increased utilization.

With the new systems, IBM engineers addressed key challenges in today's data center, where hefty costs for power usage and IT management pile up while processors sit idle or under-utilized. To help enable a more dynamic infrastructure, IBM's four new x86 rack servers and blades feature unique designs -- such as lower wattage requirements -- that can slash energy costs up to 93 percent.(1) At the same time, the new System x servers boast double the compute performance in some models, and support more memory, storage and I/O to help customers of all sizes ease the transition to highly efficient virtualized computing resources. System x blades and racks lead the industry with 96GB to 1TB memory options.

"The world is going through changes that require IT professionals in every industry to consolidate, virtualize and support a variety of different platforms -- a mix of operating systems, hardware, middleware and applications. And there is no one-size-fits-all solution for most businesses," said Adalio Sanchez, general manager, IBM System x. "Not only do these announcements continue our strong commitment to invest in and deliver leading x86 servers that address our customer's needs, System x supports multiple architectures and is designed to lower ownership costs and enable new paradigms such as Cloud computing."

In addition to hardware innovations, IBM announced new management software to complement Systems Director 6.1, which enables clients to automatically manage virtual and physical assets across platforms. IBM Power Systems, System z, storage and non-IBM x86 servers are all supported, with potential management cost savings of up to 44 percent.

"VMware and IBM have worked closely together for many years to leverage each others' expertise to increase IT efficiency, control and choice for our customers," said Brian Byun, vice president of global alliances at VMware. "With unique scale-up capacity supporting up to 96 cores and the ability to use up to 1 TB of memory, IBM's System x servers complement VMware's upcoming next-generation VMware vSphere family of products and are an excellent choice for customers deploying private cloud environments."

New IBM Servers: Lower Ownership Costs Through Innovation
This new generation of System x technology maximizes power and performance with a new generation of intelligent server processors -- the Intel® Xeon® Processor 5500 series.

IBM BladeCenter HS22
A no-compromise blade with breakneck speed, the two-socket IBM BladeCenter HS22 offers outstanding performance, flexible configuration options and simple management in an efficient server designed to run a broad range of workloads. Among other features, the completely revamped HS22 offers three times as much memory as its predecessor, which allows the HS22 to process twice as many transactions per minute. In fact, customers can achieve as high as an 11-to-1 consolidation ratio when migrating older rack and blade servers onto HS22 blades, while saving over 93 percent in energy costs alone.(1) In addition, two hot-swap internal storage bays offer customers a choice of SAS, SATA or solid-state options. The HS22 offers best-in-class reliability and is fully compatible with all BladeCenter enterprise and office chassis, protecting the investments customers have already made in BladeCenter.

IBM System x3650 M2 and x3550 M2
Built with all-new, energy-smart designs to simplify power distribution and reduce energy loss, these two-socket enterprise servers feature lower-wattage, highly efficient power supplies exceeding the "80 Plus Gold" standard, counter-rotating fans, altimeters and advanced power management. These innovations translate into reduced energy consumption and lower annual energy costs for an enterprise-class data center.(2) The servers also deliver outstanding performance, with interconnect speeds of up to 6.4 GT/s, design redundancy and unique security options.

IBM System x iDataPlex dx360 M2
The iDataPlex dx360 M2 is specifically designed for data centers that require high performance, yet are constrained on floor space, power and cooling infrastructure. The dx360 M2 provides up to five times the compute density versus 1U rack servers in the data center, and can cool the data center 70 percent more efficiently with the Rear Door Heat Exchanger. One of the new generation of System x and BladeCenter servers, the dx360 M2 significantly reduces operational costs, is simple to manage, and is pre-integrated for rapid scaling to solve today's IT business challenges.

As testament to the computing power of the new iDataPlex dx360 M2, the University of Toronto's SciNet Consortium will be using the new system, along with IBM's advanced POWER6 architecture, to build a system capable of performing 360 trillion calculations per second. The supercomputer will pioneer an innovative hybrid design containing two systems that can work together or independently, connected to a massive five petabyte storage complex. Because it is a hybrid using IBM's highly efficient iDataPlex system and POWER6 architecture, the system will be extremely flexible, capable of running a wide range of software at a high level of performance.

"IBM innovations in the server space allow us to scale our business rapidly, as needed," said Don Goodwin, Latisys executive vice president, sales and marketing, Fairfax, Va., a leading high-density co-location and managed hosting company using a mix of IBM blades and iDataPlex scale-out servers. "The mix of formats provided by System x allows us to support a wide range of applications and more easily move our enterprise and web hosting offerings into new markets."

New Levels of Performance
IBM's new-generation x86 servers deliver new levels of performance and efficiency for the enterprise. The System x3650 M2 has posted leadership 2-processor results for SPECpower_ssj2008, the two-tier SAP SD Standard Application Benchmark and the VMware VMmark virtualization benchmark.(3) In addition, the IBM BladeCenter HS22 has posted a leadership 2-processor result for SPECjbb2005.(4)

To complement the complete lineup of rack, blade and iDataPlex offerings, IBM will also release new business-optimized tower solutions in 2Q. These towers will meet the challenges of running high-performance IT in the deskside space, where security, serviceability, ease of use and reliability are critical. The new systems make the most of the features and performance of the new Intel processor family.

Systems Management Leadership
IBM today also announced key system management upgrades designed to complement its new generation of x86 systems and help IT managers orchestrate the unique workload demands of modern businesses.

IBM Systems Director 6.1
The new IBM Systems Director 6.1 provides powerful tools for managing both physical and virtual resources and features an easy-to-use, web-based interface with integrated wizards and tutorials, along with extensions such as Systems Director Active Energy Manager. It delivers broad cross-platform support, including IBM Power Systems, System z, storage and non-IBM x86 servers, with potential cost savings that can reach 34.5 percent for Windows servers and 43.8 percent for Linux x86 servers.

Unified Extensible Firmware Interface (UEFI)
IBM is offering next-generation BIOS -- the Unified Extensible Firmware Interface (UEFI) -- to provide a consistent BIOS across its portfolio and to allow for more detailed remote-configuration options.

Integrated Management Module (IMM)
The Integrated Management Module (IMM) combines diagnostics, virtual presence and remote control to manage, monitor, troubleshoot and repair from any corner of the world. Its standards-based alerting also enables "out-of-the-box" integration into enterprise management environments and provides a single firmware stack for all IMM-based systems.

IBM ToolsCenter
The IBM ToolsCenter initiative simplifies the acquisition and use of system management tools by offering them all from a single webpage. It establishes a common look and feel across the entire tool set, which maximizes efficiency and reduces training cost. The latest addition to the ToolsCenter portfolio, IBM Bootable Media Creator, is used to create custom bootable media (CD, DVD or USB key) with updates for clients' systems.

Express Models for Midsize Businesses
IBM is also introducing "Express" models of three systems that are designed for midsize companies. Two rack-mounted servers, the System x3650 M2 Express and x3550 M2 Express, deliver twice the virtualization performance and use up to 60 percent less power than previous generations. The new IBM BladeCenter HS22 Express is a versatile, easy-to-use blade optimized for performance and energy efficiency. The HS22 is two times faster than previous IBM blades and offers best-in-class reliability, availability and serviceability.

About IBM
For more information about this new generation of IBM System x, BladeCenter and iDataPlex products, availability and support, visit http://www.ibm.com/systems/x/newgeneration.

1. IBM Power Engineering Study, Feb '09

2. Based on IBM actual, public results on HS22 and Intel internal analysis. 1U rack server configuration: 2S 1C Xeon (3.8GHz, 2MB cache) with 8x 1GB memory and 1 HDD -- total power: 382W under load. HS22 blade server configuration: 2S 4C Xeon X5570 (2.93GHz, 8MB cache) with 6x 2GB memory and 1 SSD -- total power with chassis burden = 307W under load, SPECjbb2005 = 604,417 bops, 151,104 bops/JVM. SPEC and SPECjbb2005 are trademarks of the Standard Performance Evaluation Corporation (SPEC).

3. IBM System x3650 M2: 5,100 SAP SD Benchmark users, 1.98 seconds average dialog response time, 25,530 SAPS, measured throughput of 1,532,000 dialog steps per hour (or 510,670 fully processed line items per hour), and an average CPU utilization of 99 percent for the central server. Configuration: two Intel Xeon X5570 processors, 2.93GHz with 256KB L2 cache per core and 8MB L3 cache per processor (2 processors/8 cores/16 threads), 48GB of memory, 64-bit DB2 9.5, Microsoft Windows Server 2003 Enterprise x64 Edition, and SAP ERP 6.0 (certification number 2008079). Results published December 19, 2008; results referenced current as of March 30, 2009 (http://www.sap.com/benchmark). SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries.

IBM System x3650 M2 server achieved a Performance to Power Ratio of 1,860 overall ssj_ops/watt on the SPECpower_ssj2008 benchmark with the Quad-Core Intel(R) Xeon(R) Processor X5570 (2.93GHz, 256KB L2 cache per core and 8MB L3 cache per processor -- 8 cores/2 chips/4 cores per chip), 8GB of memory and IBM J9 Java(TM)6 Runtime Environment and Microsoft(R) Windows(R) Server 2008 Enterprise x64 Edition. Results referenced current as of March 30, 2009. Results submitted to SPEC(R) for review and will be posted at http://www.spec.org/jbb2005/results upon completion of successful review.

IBM System x3650 M2 server delivered 23.89 @ 17 Tiles -- the highest 2-socket and 8-core result as of March 30, 2009. The VMmark disclosure report and all results are available at http://vmware.com/products/vmmark/results.html. VMware is a registered trademark and VMmark is a trademark of VMware, Inc. VMware VMmark is a product of VMware, an EMC Company. VMmark utilizes SPECjbb2005® and SPECweb2005®, which are available from the Standard Performance Evaluation Corporation (SPEC).

4. IBM BladeCenter HS22: 604,417 SPECjbb2005 business operations per second (SPECjbb2005 bops) and 151,104 SPECjbb2005 bops/JVM. Results referenced current as of March 30, 2009. Results submitted to SPEC® for review and will be posted at http://www.spec.org/jbb2005/results upon completion of successful review. SPEC and SPECjbb2005 are trademarks or registered trademarks of Standard Performance Evaluation Corporation (see http://www.spec.org/spec/trademarks.html for all SPEC trademarks and service marks).

IBM BladeCenter HS22


Highlights
Improve service with unparalleled RAS features and innovative management
Reduce cost through increased performance, utilization and efficiency
Manage growth and reduce risk on a BladeCenter® platform with proven stability

Designed for versatility
The IBM® BladeCenter HS22 offers flexible options to support a broad range of workloads, including virtualization and enterprise applications. Along with intuitive UEFI-based tools, the HS22 can be customized and deployed quickly, while best-in-class reliability features help keep you up and running. Mix and match the HS22 with the industry’s most diverse set of chassis and blades, including options that go beyond x86.

Built for performance
The HS22 provides outstanding performance with support for the latest Intel® Xeon® processors, high-speed I/O, high memory capacity and fast memory throughput. The HS22 can run applications up to twice as fast as previous-generation blades. In fact, you can run many applications faster than on competitors' four-socket blades.

Monday 19 September 2011

11th ITCN ASIA



ITCN Asia leads the way in delivering the best interactive opportunities to participants and an ideal environment for audiences.

Ten years down the road, ITCN Asia 2011 will mark a shift toward a more focused B2B event, with greater emphasis on relevant technology across different sectors and sub-sectors. The event will strike the right balance between IT, telecom and electronics industry players, senior decision makers and practitioners.

Apple Could Deliver a September Surprise With New MBPs

By John P. Mello Jr.
It was only last February that Apple rolled out its current line of MacBook Pros, but a recent report suggests it's getting ready for another refresh as soon as this month. The notebook bump would allegedly be relatively minor, adding slightly faster processors and not much else. But it could be just enough to give Apple yet another blowout holiday sales season.



Apple's (Nasdaq: AAPL) MacBook Pro laptop was upgraded with new Intel (Nasdaq: INTC) processors a little more than six months ago, but another refresh may be in the works that will bring revamped models to the shelves within the next two weeks.

The move is believed to be necessary to keep Apple's laptop up to date with Intel's latest Sandy Bridge quad-core processors.

Citing anonymous sources, AppleInsider reported Tuesday that the refresh will deliver marginal speed bumps to the laptop's performance. Other than that, no material changes over existing models will be ushered in with the update, according to the report.

The existing lineup of Intel processors in the MacBook Pro, which run at 2.0, 2.2 and 2.3 GHz, will be replaced with new Core i7 chips running at 2.4, 2.5 and 2.7 GHz, it stated.

Apple did not respond to a request for comment by MacNewsWorld on the rumored introduction.


Avoiding Consumer Regret
Hours after the AppleInsider report appeared, Decide.com -- a price-watching service that uses a number of data-mining and predictive technologies to make recommendations to consumers so they can make better purchasing decisions -- posted a "wait" warning for the MacBook Pro.

To arrive at a purchasing recommendation, Decide gathers news and rumors from thousands of sources across the Web and analyzes them with proprietary algorithms, explained the company's CEO Mike Fridgen.

"As those rumors for a particular product hit a critical mass, our editors will see that and change our recommendations from buys to waits," he told MacNewsWorld. "In the case of the MacBook Pro, we've just hit that point. It's gotten to a point where we believe now that a consumer would have regrets if they purchased a current version of the product."

Predictable Introductions
How much regret, however, may be debatable.

For most consumers, the difference between 2.2 and 2.4 GHz isn't going to be noticeable, asserted Carl Howe, research director for the Yankee Group in Boston. "It just makes people feel like they're not buying last year's technology," he told MacNewsWorld.

Although a jump from 2.0 to 2.4 gigahertz is significant, he conceded, "most people won't really pay very much attention to it."

"These machines are fast enough that most consumers don't tax them," he added.

Howe, who has studied Apple's introduction patterns over time, wouldn't be surprised if the company did a late-fall refresh of the line. "Apple introductions are remarkably predictable because of the sheer pacing of the market," he argued. "So, for instance, they very rarely do a Pro refresh in Q4 because they're refreshing all their consumer products just before Q4."

However, "Every now again they break the rule," he conceded, "and this sounds like it might be one of those. Although if they're going to break the rule, they usually do it as a minor update.

"Faster processor, yes," he added. "New case and design, no."

Blowout Holiday Numbers
The speed at which this latest refresh may be taking place could reflect Apple feeling market heat to push out new products faster. While six months between refreshes may seem fast in the Apple world, it isn't in the PC world, where the average refresh is three times a year, observed Stephen Baker, an analyst for the NPD Group.

"Apple hasn't had that fast a cadence, but clearly there's demand from the markeplace to turn over products faster than Apple has traditionally done," he told MacNewsWorld.

He contended that now is as good a time as any for a MacBook Pro refresh. "You're after back-to-school and you're ahead of the holiday season," he said.

In addition, because Apple had such outstanding fourth quarter sales last year, it will be difficult to surpass that performance this year, he continued. A refresh, even a small one, of a major product line may be just what it needs to keep pace with last year's numbers.

"They'd like to have some blowout holiday numbers this season, and upgrading and refreshing on the Mac, because it doesn't happen all that often, tends to be a great driver of volume for them," he said.

Snoozing Technique Could Help Keep Smartphone Batteries Fresh

Smartphones have notoriously short battery life, but a new approach to power management could help address that problem -- at least, while they're being used on WiFi networks. The ingenious technique could also keep tablets, laptops and other mobile devices running longer without a charge. "It's a clever little discovery," said Carl Howe, a director of research at the Yankee Group.


Researchers at the University of Michigan have come up with a way to extend the battery life of tablets, smartphones and other devices that use WiFi.

Kang Shin, a professor of computer science and engineering, and Xinyu Zhang, a doctoral student, have developed E-MiLi, a power management method that could cut energy consumption by about 44 percent for up to 92 percent of users in WiFi zones.

E-MiLi, or Energy-Minimizing Idle Listening, involves slowing down the rate at which the WiFi receiver retrieves packets, along with filtering out unnecessary packets.

That filtering is done by creating special headers that include the destination address of each packet, and then applying an algorithm developed by Shin and Zhang that detects packets addressed specifically to a particular receiver and wakes up the receiver only then.
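
In outline, that filter is an address match performed before the radio fully wakes: only a packet whose embedded destination matches the device's own address justifies leaving the low-power state. The sketch below, with made-up field names, shows just that control flow; the real check runs in WiFi firmware against a physical-layer preamble, not on parsed headers.

# Simplified wake-up filter: wake only for packets addressed to this device.
# The dict-based "header" and field names are illustrative; E-MiLi performs
# the equivalent match on a special preamble inside the WiFi firmware.
MY_ADDRESS = "aa:bb:cc:dd:ee:01"

def should_wake(header, my_address=MY_ADDRESS):
    """Return True only when the embedded destination matches this receiver."""
    return header.get("destination") == my_address

incoming = [
    {"destination": "aa:bb:cc:dd:ee:01"},   # addressed to us -> wake up
    {"destination": "aa:bb:cc:dd:ee:99"},   # someone else's packet -> stay idle
]
for packet in incoming:
    print("wake to full clock rate" if should_wake(packet) else "stay downclocked")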

"It's a clever little discovery," Carl Howe, a director of research at the Yankee Group, told TechNewsWorld.

E-MiLi works for "all WiFi-equipped mobile devices, including mobile phones, tablets and laptops," said Zhang; the technique can also be adapted for other wireless networks such as ZigBee.

"It significantly improves the energy efficiency of WiFi devices," he told TechNewsWorld. "For example, it extends battery life by 54 percent for smartphones."


The E-MiLi Art of Selective Snoozing
Since it's not possible to predict when packets will arrive at a WiFi receiver, Shin and Zhang decided to reduce the clock rate of the receiver during its idle listening (IL) period and have it wake up and respond to incoming packets as they arrive.

However, receiving packets at a lower clock rate is a problem, because the Nyquist criterion requires the receiver's sampling clock rate to be at least twice the bandwidth of the transmitted signal.

So, E-MiLi uses a new approach called "Sampling Rate Invariant Detection," or SRID.

This adds a special preamble or header to each packet of data and incorporates a linear-time algorithm that can accurately detect the preamble even if the receiver's clock rate is much lower than that of the transmitter.

SRID embeds the destination address into the preamble so that a receiver will only respond to packets destined for it.

On detecting the preamble, the WiFi receiver rockets into full clock rate and recovers the data packet.
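
The essential point is that a sufficiently distinctive preamble can still be picked out of a sample stream taken well below the nominal rate. The sketch below illustrates that idea with a simple correlation detector over a decimated signal; it is not the published SRID algorithm, and all of the signal parameters are invented for the example.

# Conceptual illustration only: detect a known preamble in a stream sampled at
# one quarter of the full clock rate by correlating against an equally
# decimated template. This is NOT the published SRID algorithm; the preamble,
# noise level and threshold are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
full_preamble = rng.choice([-1.0, 1.0], size=64)    # pseudo-random preamble chips
stream = rng.normal(0.0, 0.3, size=1024)            # background noise
stream[400:464] += full_preamble                    # preamble embedded mid-stream

downclock = 4                                       # receiver runs at 1/4 clock rate
decimated_stream = stream[::downclock]
decimated_template = full_preamble[::downclock]

corr = np.abs(np.correlate(decimated_stream, decimated_template, mode="valid"))
peak = int(np.argmax(corr))
detected = corr[peak] > 4 * corr.mean()             # crude detection threshold
print("preamble near full-rate sample", peak * downclock, "detected:", detected)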

"Everybody forgets WiFi is based on Ethernet, and there never was any attention paid to this preamble and power-saving stuff because it was designed for a world of desktop and server computers," the Yankee Group's Howe pointed out.

"When they made it wireless, they probably should have spent more time on this," he added.

Why We Need E-MiLi or Something Similar
WiFi's built-in power-saving mode (PSM) doesn't help much, because it can't reduce the idle listening time associated with carrier sensing and configuration, Shin and Zhang found.

IL consumes as much energy as active transmission and reception, and WiFi clients spend lots of time in IL because of technical issues such as media access control (MAC)-level contention and network-level delays, noted Shin and Zhang.

Even with PSM enabled, IL accounts for more than 80 percent of energy consumption for clients in a busy network and 60 percent in a relatively idle network.

WiFi receivers must constantly be in IL mode because packets arrive unpredictably, and also because WiFi receivers must find a clear receiving channel.

That's true for all wireless radio receivers -- for example, Bluetooth hops among 79 channels in order to minimize receiving errors, according to a dissertation by Texas A&M University student Ahmed Ahmed Emira. Each Bluetooth packet has to arrive within a 625-microsecond slot.
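
For scale, that 625-microsecond slot is what gives Bluetooth its familiar rate of 1,600 hop opportunities per second, as the short calculation below shows.

# The 625-microsecond slot implies 1,600 slots (hop opportunities) per second.
slot_us = 625
slots_per_second = 1_000_000 // slot_us
print(slots_per_second)   # 1600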

Where E-MiLi Might Go
E-MiLi integrates SRID into existing MAC and sleep-scheduling protocols by adding a downclocked IL mode to the receiver's state machine through Opportunistic Downclocking (ODoc).

ODoc takes a smart approach to downclocking, assessing the potential benefit of doing so before letting the receiver downclock.
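
A minimal way to picture that assessment: downclock only when the energy expected to be saved over the coming idle period outweighs the cost of switching clock rates. The decision rule below is illustrative only, with made-up power and overhead figures; it is not E-MiLi's published cost model.

# Illustrative decision rule in the spirit of opportunistic downclocking:
# downclock only if the estimated saving over the idle period beats the
# clock-switching overhead. All figures are placeholders, not measured values.
def worth_downclocking(expected_idle_ms,
                       idle_power_full_mw=300.0,    # assumed idle power at full clock
                       idle_power_down_mw=150.0,    # assumed idle power when downclocked
                       switch_overhead_mj=0.5):     # assumed cost of switching rates
    saved_mj = (idle_power_full_mw - idle_power_down_mw) * expected_idle_ms / 1000.0
    return saved_mj > switch_overhead_mj

print(worth_downclocking(expected_idle_ms=1.0))    # brief gap: not worth switching
print(worth_downclocking(expected_idle_ms=20.0))   # long gap: downclock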

Shin and Zhang's tests show that E-MiLi can detect packets with close to 100 percent accuracy even if the receiver operates at one-sixteenth the normal clock rate.

E-MiLi reduces energy consumption consistently across different traffic patterns without any noticeable performance degradation, Shin and Zhang found.

Device manufacturers will likely show the most interest in E-MiLi, said Zhang, because it runs in the hardware, firmware and device drivers of WiFi cards.

FTC: Mobile Apps Privacy Protection Not Just for Kids

By John K. Higgins


The legal basis for FTC action on privacy leans heavily on the agency's mandate to regulate deceptive practices. "FTC actions to date with regard to adult consumer data privacy and security have dealt with companies that do not follow their own policies, or have misleading policies or no notice of their policies at all," said Alan Friel, a partner with Wildman Harrold. Such deficiencies are considered deceptive practices.





Providers of apps for mobile devices are just as responsible as other electronic commerce vendors in terms of protecting the privacy of customers. In a recent enforcement action, the Federal Trade Commission (FTC) signaled that mobile apps fall within the agency's jurisdiction, and that it will not hesitate to investigate potential privacy violations associated with mobile apps.

The enforcement action involved a complaint against a publisher of electronic games, and it marked the first time the FTC initiated a privacy case involving apps for mobile devices.

W3 Innovations, through its Broken Thumbs Apps unit, developed and distributed mobile apps for the iPhone and iPod touch that allowed users to play games and share information online. Several of the apps were directed to children and were listed in the Games-Kids section of Apple's (Nasdaq: AAPL) App Store. There were more than 50,000 downloads of those apps, according to the FTC.

While the W3 Innovations case was largely based on provisions related to the Children's Online Privacy Protection Act (COPPA), a key element in the case was FTC's determination that mobile apps are subject to its jurisdiction regarding privacy protection for all users, regardless of age.


Not Just for Kids
"The case represents the FTC's first enforcement action against a mobile app developer, and it seems to send a clear message that mobile app developers should follow the same rules as more traditional websites when it comes to consumer privacy issues and privacy policies, especially when marketing to children," states Wildman, Harrold, Allen & Dixon in an analysis of the case posted online.

"There is no doubt that the evolution of consumer data privacy we are currently experiencing includes mobile," Alan Friel, a partner with Wildman Harrold, told TechNewsWorld.

The FTC has more than just hinted that mobile apps of all types are on its regulatory radar screen. The W3 Innovations case arose "because we have been paying attention to that area," Claudia Farrell, a spokesperson for the agency, told TechNewsWorld. Additional mobile app inquiries are in the pipeline at the FTC.

"Although the FTC does not enforce any special laws applicable to mobile marketing, the FTC's core consumer protection law -- Section 5 of the FTC Act -- prohibits unfair or deceptive practices in the mobile arena," David Vladeck, director of FTC's Bureau of Consumer Protection, said at a Senate hearing last May.

The FTC "is making a concerted effort to ensure that it has the necessary technical expertise, understanding of the marketplace, and tools needed to monitor, investigate, and prosecute deceptive and unfair practices in the mobile arena," Vladeck added.

The legal basis for FTC action on privacy leans heavily on the agency's mandate to regulate deceptive practices -- rather than a standard that relates to invasion of privacy per se.

"FTC actions to date with regard to adult consumer data privacy and security have dealt with companies that do not follow their own policies, or have misleading policies or no notice of their policies at all," Friel said. Such deficiencies are considered deceptive practices.

Industry Active on Mobile Front
The issue of mobile apps privacy has suddenly become significant for online businesses. In early September, for example, the Software & Information Industry Association (SIIA) joined the Future of Privacy Forum's Application Privacy Working Group and became a sponsor of FPF's Application Privacy project.

SIIA's participation with FPF is aimed at helping to develop voluntary privacy principles and best practices for mobile software applications. The goal is to lessen the likelihood of burdensome government regulation.

"Mobile app developers have a responsibility to create and disclose their privacy policies when they collect and use personal information. We are joining this effort out of the conviction that the industry does not need government regulation to move us in the direction of providing a trusted environment for our users," said Mark MacCarthy, vice president of public policy at SIIA.

While the W3 Innovations case highlighted the mobile apps privacy issue, SIIA's involvement with the FPF project was not solely based on the FTC's action.

"The W3 case was focused on information about children and is generally applicable to all mobile app providers insofar as they collect information about children. The need for good privacy practices is broader than that, and it was this broader concern for good data protection practices that motivated SIIA to affiliate with the Future of Privacy Forum," MacCarthy told TechNewsWorld.

"Continued growth and innovation in the vibrant mobile marketplace is dependent on consumer confidence in the privacy protections provided by mobile application providers. While many mobile application developers are transparent about their collection, use, and protection of consumer data, recent reports have indicated that this is not always the case," MacCarthy said.

Mobile apps providers will need to keep a sharp eye on how privacy eventually is regulated.

Self-Regulation Questioned
"FTC leadership has been fairly vocal in expressing its dissatisfaction with the effectiveness of current self-regulatory efforts. Congress too has grown inpatient, and a half dozen bills are under consideration that may result in greater regulatory authority for the FTC and requirements for greater transparency, choice, and security for consumers regarding their data, particularly regarding behavioral advertising, which tracks and targets consumer behavior and mobile," Friel said.

The class action bar has brought more than 50 lawsuits this year dealing with online and mobile tracking or targeting, he noted. "The issue is not going away soon."

In the W3 Innovations case, the apps developed by the company encouraged children to email their comments -- such as "shout-outs" to friends and requests for advice -- to a company-generated site. The FTC alleged that the publisher collected and maintained more than 30,000 email addresses in violation of federal regulations, including parental notice requirements.

In addition, the FTC alleged that the defendants allowed children to publicly post comments, including personal information, on message boards.

Without admitting to the allegations, the company settled the case with the FTC on August 12. The firm consented to pay a US$50,000 penalty. The settlement also bars the company from future violations of the COPPA rule and requires the publisher to delete all personal information collected in violation of the FTC's rules.

W3 "did not ask for or collect information about the age of our users because there was no technical or functional need for this information," the company said in a statement provided to TechNewsWorld by Barry Reingold, an attorney with Perkins Coie.

W3 Innovations maintained that "any violations were inadvertent."