Cloud Computing Maturity Model – IDC

IDC’s Cloud Maturity Model traces the increasing value of, and investment in, cloud computing across five stages, from Ad Hoc to Optimized. The sections that follow describe each stage and highlight its fundamental outcomes.


Stage 1: Ad Hoc

Companies are beginning the exploration process to increase their awareness of cloud technology options, key considerations, and cloud’s contribution toward IT efficiency. There is limited enterprisewide awareness of these activities, and some instances may be unauthorized. Some are turning to cloud because of the immediacy of the need and the ability to procure capacity with minimal monthly or one-time investments that require little or no outside approval.

Stage 2: Opportunistic

Companies are experimenting with more standardized offerings and developing short-term improvements in access to IT resources via cloud. They are also promoting buy-in to cloud computing across the company and acknowledging the need for a companywide approach. They are testing their ability to transition workloads from existing traditional in-house or outsourced IT deployments as well as new ones. They consider cloud for new solutions or isolated computing environments with minimal impact on existing business processes, lower implementation costs, and/or faster delivery for commodity resources.

Stage 3: Repeatable

Companies are enabling more agile access to IT resources through aggressive standardization, identifying cloud best practices, and increasing governance. Business and IT users are beginning to rely on self-service portals to access cloud services based on cost and quality of service as well as to automate approvals and workflows that are necessary to rapidly provision and activate services. Users have access to a wider range of resources with more predictability, transparency into the cost of those resources, and the ability to more easily forecast their IT resource requirements.

Stage 4: Managed

Companies are expanding the boundaries of how and why they use cloud. This stage represents a consistent, best-practice, enterprisewide approach to cloud, speeding iterative improvement cycles to increase cloud adoption and business value. Companies in this stage are orchestrating service delivery across an integrated set of resources and collaborating internally and externally to support their future technology needs. Users can procure additional services, add new users, and increase or decrease compute capacity as needed through self-service portals, expanding the organization’s ability to operate not just more efficiently but also more strategically.

Stage 5: Optimized

Companies are driving business innovation through seamless access to IT resources from internal and external service providers and making informed decisions based on the true cost and value of those services. They are using cloud to lower costs and speed delivery. The business impact is most noticeable for new initiatives as well as for high-business-value or highly innovative projects, where some level of customization of IT resources is critical and risk sharing creates an environment that fosters innovation. These organizations have the ability to leverage their IT capabilities as a component of new products and services. IT is an equal partner in achieving long-term business goals, and IT is responsible for ensuring the successful delivery of IT capabilities throughout the life cycle of those technologies.

Defining Progress Across People, Process, and Technology

IDC tracks the five stages across people, process, and technology, using eight measures that apply to private, public, and hybrid cloud deployments. IDC chose these measures because they require deliberate attention and change to maximize the value of cloud investments.
People: IDC segments “people” into two core measures — IT roles and business roles — because the IT and business groups develop separately yet coordinate increasingly through the adoption of cloud. Through the five stages, these measures consider attributes such as skills, culture, leadership, organizational structure, and interdepartmental relationships.
Process: IDC segments “process” into three core measures:
Vendor management: IDC selected this measure because organizations will need to change the quantity and mix of vendors they work with, as well as the way they work with them, considering attributes such as procurement, contract definition, compliance, incident management, innovation, and business continuity.
Service management: IDC selected this measure because managing cloud services requires a transition from a traditional model to an end-to-end service delivery focus that defines and manages IT capabilities in terms of policies and service-level agreements (SLAs). Elements of service management include service definitions, configuration standardization, SLAs and policies, service performance and consumption measurement, forecasting, and chargeback.
Architecture, security, and integration: IDC selected this measure because cloud represents a fundamental shift in the way IT environments are designed and managed. Cloud environments rely on well-defined standards to enable workload and information portability across a wide range of heterogeneous internal and external resources. The ability to create, deploy, and optimize end-to-end services that fully exploit the self-service, portability, and elasticity capabilities provided by cloud architectures is fundamental to achieving a mature cloud environment.
Technology: IDC segments “technology” into three core measures because adoption of cloud-enabled platform, infrastructure, and software occurs at different rates. However, the technology will continue to evolve over time, so these measures are less about the technology itself and more about its management, considering attributes such as adoption rates and ease of adoption, deployment models (public, private, or hybrid), interdependencies, technology maturity and risk, and transparency into a vendor’s technology stack.

The three core measures are:
Platform: Encompasses functionality enabled by application development, testing, database, analytics, middleware, and related packaged, open source, and custom software, including public PaaS services.
Infrastructure: Encompasses functionality enabled by physical and virtual systems, storage, and network hardware and public IaaS services, as well as functionality enabled by packaged, open source, and custom software and SaaS services providing infrastructure software functionality, including operating systems, hypervisors, cloud system software, security and identity management, system management, storage management, and network management.
Software: Encompasses functionality enabled by packaged, open source, and custom application software, including SaaS-based application software solutions. Examples include collaborative apps, content apps, CRM, ERM, SCM, ops and manufacturing apps, and engineering apps.

Google App Engine

Why App Engine

Google App Engine enables you to build web applications on the same scalable systems that power Google applications. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: you just upload your application, and it’s ready to serve your users.

App Engine is a complete development stack that uses familiar technologies to build and host web applications. With App Engine you write your application code, test it on your local machine, and upload it to Google with a simple click of a button or a command-line script. Once your application is uploaded to Google, we host and scale your application for you. You no longer need to worry about system administration, bringing up new instances of your application, sharding your database, or buying machines. We take care of all the maintenance so you can focus on features for your users.
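
To make that workflow concrete, here is a minimal sketch of an application for the classic Python 2.7 runtime; the file names, handler, and greeting are illustrative, while webapp2 and the app.yaml directives mentioned in the comments follow the documented conventions.

    # main.py -- a minimal application for the classic Python 2.7 runtime.
    # A matching app.yaml would declare "runtime: python27" and route all
    # URLs (- url: /.*) to the WSGI object below (script: main.app).
    import webapp2

    class MainPage(webapp2.RequestHandler):
        def get(self):
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.write('Hello from App Engine!')

    # Deployment is then one step: a click in the SDK launcher or a single
    # appcfg.py update command.
    app = webapp2.WSGIApplication([('/', MainPage)], debug=True)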

You can create an account and publish an application that people can use right away at no charge from Google, and with no obligation. When you need to use more resources, you can enable billing and allocate your budget according to your needs. Find detailed pricing for usage that has exceeded the free quota on our Billing page.

Automatic Scalability

For the first time, your applications can take advantage of the same scalable technologies that Google applications are built on, things like BigTable and GFS. Automatic scaling is built into App Engine; all you have to do is write your application code, and we’ll do the rest. No matter how many users you have or how much data your application stores, App Engine can scale to meet your needs.

The Reliability, Performance, and Security of Google’s Infrastructure

Google has a reputation for highly reliable, high-performance infrastructure. With App Engine you can take advantage of the 10 years of knowledge Google has in running massively scalable, performance-driven systems. The same security, privacy, and data protection policies we have for Google’s applications apply to all App Engine applications. We take security very seriously and have measures in place to protect your code and application data.

Currently, Google App Engine supports Java, Python, PHP, and Go. Additionally, your website templates can include JavaScript along with your HTML, which, among other things, allows you to write AJAX-enabled web applications.

Google App Engine made a splash when it launched in the spring of 2008. It was different from most other cloud systems of the day because it was neither IaaS (Infrastructure-as-a-Service, e.g., Amazon EC2) nor SaaS (Software-as-a-Service, e.g., Salesforce). It was something in between, and it ushered in the era of PaaS (Platform-as-a-Service). Instead of a fixed application (SaaS) or raw hardware (IaaS), App Engine managed the infrastructure for users. Furthermore, it provided a development platform… users get to create their own apps, not just use the one provided by the cloud vendor, and it leveraged the infrastructure as a hosting platform.

The development-to-release cycle is minimized because high-level services that developers would normally have to build are already available via an App Engine API. A development server is provided to let users test their code (with certain limitations) before running in production. And finally, deployment is simplified, as Google handles it all. Outside of setting up an account and billing structure, there is no machine setup or administration, as Google takes care of all the logistics there too. Even as your app runs with fluctuating network traffic, the App Engine system auto-scales, allocating more instances of your app as needed and then releasing resources when they are no longer needed.

A web developer can now use Google’s infrastructure, finely tuned for speed and massive scaling, instead of trying to build it themselves. In the past, a developer would create an app, generally need a machine or web hosting service that could host a LAMP stack, administer each of the “L”, “A”, “M”, and “P” components, and somehow make the app globally accessible. Moreover, developers were also generally responsible for the load-balancing, monitoring, and reporting of their systems, and, to reiterate, for one of the most difficult and expensive things to build yourself: scaling. All of these are taken care of by App Engine.

By now, you have a good idea as to why Google developed App Engine: to put it simply, to remove the burden of being a system administrator from the developer. Using a LAMP stack involves choosing a Linux distribution, choosing the kernel version, and so on, plus configuring PHP and an Apache web server. There is also the need to run and manage a database server (MySQL or otherwise) and other aspects of a running system (monitoring, load-balancing, reporting). The list continues with managing user authentication, applying software patches, and performing upgrades, each of which may break your app, bringing even more headaches for developers/sysadmins.

Other than the actual application, everything else is nearly orthogonal to the solution that developers are trying to create for their customers. App Engine attempts to handle these complexities to let you focus on your app(s). An app running on App Engine should be easy to build, manage, and scale.

PaaS and What App Engine Isn’t

Some users confuse Google App Engine with Amazon’s EC2 service, but this is an apples-to-oranges comparison. The two operate at different cloud service levels, and each has its strengths and weaknesses. With App Engine, you only need to worry about your application and let Google take care of hosting and running it for you. With EC2, you’re responsible not only for the app but also for its database server, web server, operating system, monitoring, load-balancing, upgrades, etc. This is why the costs of IaaS services typically run lower than those of PaaS services: with PaaS, you’re “outsourcing” more work/responsibility. Cost estimates are also usually clouded by not accounting for the administration overhead of managing the infrastructure yourself. A better apples-to-apples comparison would be EC2 to the Google Compute Engine IaaS service.

PaaS systems also differ from the SaaS layer above them, as SaaS applications are fixed and must be taken as-is from the respective cloud vendor. Unless you work with or at the vendor, you cannot change the SaaS application you use. It’s quite the opposite with PaaS systems because you (as a PaaS user) are the developer, building and maintaining the app, so the source code is your responsibility. One interesting perspective is that you can use a PaaS service to build and run SaaS apps!

Language Runtimes

App Engine lets engineers use familiar development tools and environments to build their applications. This includes the Python, Java, and Go languages. Because App Engine supports Java, a host of other languages which run on the Java Virtual Machine (JVM) are also supported… these include (but are not limited to): Scala, Ruby (JRuby), Groovy, PHP (Quercus), JavaScript (Rhino), and Python (Jython). (If you’re curious about the Jython support (running Python on the Java runtime vs. the pure Python runtime), it’s a great way to leverage a legacy Java codebase while developing new code in Python as Jython code can work with both Python and Java objects.)

Sandbox

Security is critically important. Developers (typically) would not want other applications or users to get any kind of access to their application code or data. To ensure this, all App Engine applications run in a restricted environment known as a sandbox.

Because of the sandbox, applications can’t execute certain actions, including opening a local file for writing, opening a socket connection, and making operating system calls. (There used to be more restrictions, but over time the team has tried to raise quotas and remove as many restrictions as it can. These changes don’t make the airwaves as much as bad news does.)
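
As a rough illustration of those limits, the sketch below shows the kinds of operations the Python sandbox rejects. The exact exception types and messages varied across runtime versions, so treat this as a sketch of the behavior, not a specification.

    # sandbox_demo.py -- operations the sandbox disallows (illustrative only).
    def try_restricted_ops():
        try:
            open('/tmp/scratch.txt', 'w')      # writing to the local filesystem
        except IOError as e:
            print 'file write blocked:', e
        try:
            import socket
            s = socket.socket()                # opening a raw socket connection
            s.connect(('example.com', 80))
        except Exception as e:
            print 'socket use blocked:', e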

Services

Any network developer would say, “Without being able to support two-way socket connections, you can’t create useful applications!” The same may be true of the other restrictions. However, if you think about it carefully, why would you want to use these lower-level OS features? “To make useful apps with!” You want outbound sockets to talk to other processes, and perhaps you may want inbound sockets to listen for service requests.

The good news is that the App Engine team knows what you want, so it has created a set of higher-level APIs/services for developers to use. Want your app to send and receive e-mail or instant messages? That’s what the e-mail and XMPP APIs are for! Want to reach out to other web applications? Use the URLfetch service! Need Memcache? Google has a global Memcache API. Need a database? Google provides both its traditional scalable NoSQL datastore and access to the relational, MySQL-compatible Google Cloud SQL service.
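
Here is a short sketch of how a few of these services fit together in Python. The urlfetch, memcache, and mail calls follow the documented APIs; the URL, sender, and recipient values are illustrative placeholders.

    # services_demo.py -- composing App Engine service APIs.
    from google.appengine.api import mail, memcache, urlfetch

    def fetch_with_cache(url):
        # URLfetch stands in for raw outbound sockets; memcache avoids refetching.
        page = memcache.get(url)
        if page is None:
            result = urlfetch.fetch(url)
            if result.status_code == 200:
                page = result.content
                memcache.set(url, page, time=300)   # cache for five minutes
        return page

    def notify(owner, message):
        mail.send_mail(sender=owner,                # must be an authorized sender
                       to='user@example.com',       # illustrative recipient
                       subject='App Engine notification',
                       body=message)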

The list of all the services that are available to users changes quite often as new APIs are created. At the time of this writing, the following services/APIs are available to developers:

  • App Identity
  • Appstats
  • Backends
  • Blobstore
  • Capabilities
  • Channel
  • Cloud SQL
  • Cloud Storage
  • Conversion
  • Cron
  • Datastore
  • Denial-of-Service
  • Download
  • Federated Login (OpenID authentication)
  • Files
  • (Full-Text) Search
  • Images
  • LogService
  • Mail
  • MapReduce
  • Matcher
  • Memcache
  • Namespaces/Multitenancy
  • NDB (new database)
  • OAuth (authorization)
  • Pipeline
  • Prospective Search
  • Task Queue (Pull and Push)
  • URLfetch
  • Users (authentication)
  • WarmUp
  • XMPP

You can read more about most of these APIs in the official API docs pages. (Docs for the others are available but not on this page.) Also, the Google App Engine team is constantly adding new features, so keep your eyes on the Google App Engine blog for announcements of new and updated services and APIs.

Administration

One of the benefits of choosing to host your apps on PaaS systems is being freed from administration. However, this means giving up a few things… you no longer have full access to your logs, nor can you implement custom monitoring of your app (or the system it runs on). This is further impacted by the sandboxed runtime environment mentioned above.

To make up for some of this lack of access to application and system information, Google has provided various tools for you to gain a better insight into your app, including its performance, traffic, error rate, etc.

The first tool is an administration console. (App Engine provides two “admin consoles” actually.) A fully-featured version is for your application running in production while the other one is a lightweight version for the development server.

The team has added so many new features that the current incarnation of the dashboard includes far more than can be described here. However, the general structure and the information displayed remain much the same.

Another tool is a general system status page. While it is not an indication of how any one particular app is doing, it does show what is going on with the system as a whole.

The final tool is Appstats. It is in the same class as a profiler, but custom-made to help you find inefficient ways your code may be interacting with App Engine services (rather than traditional profiling of code coverage, memory usage, function call metrics, program behavior, etc.). Its use is best described in the App Engine team’s introductory blog post:

“Appstats is a library for App Engine that enables you to profile your App Engine app’s performance, and see exactly what API calls your app is making for a given request, and how long they take. With the ability to get a quick overview of the performance of an entire page request, and to drill down to the details of an individual RPC call, it’s now easy to detect and eliminate redundancies and inefficiencies in the way your App Engine app works.”
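
Enabling Appstats in a Python app follows the documented recording hook: add a middleware function to appengine_config.py, and every request’s RPCs are recorded for browsing in the Appstats console.

    # appengine_config.py -- turn on Appstats for all WSGI requests.
    from google.appengine.ext.appstats import recording

    def webapp_add_wsgi_middleware(app):
        # Wraps the application so Appstats records each RPC a request makes.
        return recording.appstats_wsgi_middleware(app)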

Applications (web & non-web)

While many applications running on Google App Engine are web-based apps, they are certainly not limited to those. App Engine is also a popular backend system for mobile apps. When developing such apps, it’s much safer to store data in a distributed manner and not solely on devices which could get lost, stolen, or destroyed. Putting data in the cloud improves the user experience because recovery is simplified and users have more access to their data.

For example, the cloud is a great place for mobile app user info such as high scores, contacts, levels/badges, etc. If users lose their phone, they only need to get a new phone and reinstall the application; all their data can then be streamed from the cloud. Not only is recovery simplified, but scenarios become possible like users pulling up their leveling or high-score info from the home computer upstairs in the evenings while their mobile phones charge downstairs. Again, the cloud can be a tool to provide a better user experience!

When developing a backend for mobile applications, the same decision needs to be made on whether a company should host it themselves or take it to the cloud. Do you spend time and resources building out infrastructure, or is it better to leave it to companies that do this for a living and focus on the application instead?

Mobile phones only need to be able to make outbound HTTP requests to instantly connect to your App Engine backend. You can control the application-level protocol, authentication/authorization, and payload contents, so it’s not any more complex than providing a similar backend for a traditional web app.
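
As a sketch of how little is involved, the hypothetical endpoint below returns a player’s data as JSON to any device that can issue an HTTP GET; the URL, handler, and payload fields are invented for illustration and are not part of any App Engine API.

    # api.py -- a tiny mobile-backend endpoint (illustrative names throughout).
    import json
    import webapp2

    class HighScoreHandler(webapp2.RequestHandler):
        def get(self):
            # A real app would look these values up in the datastore.
            payload = {'player': self.request.get('player'),
                       'high_score': 4200}
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps(payload))

    app = webapp2.WSGIApplication([('/api/highscore', HighScoreHandler)])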

Migration to/from App Engine

The section title alone cannot convey all the aspects of this category when considering cloud vendors. It includes migration of applications to/from your target platform (in this case App Engine), ETL and migration of data, bulk upload and download, and vendor lock-in.

Porting your applications to App Engine is made simpler by its familiar development environments, namely Java, Python, and now Go. Java is the de facto standard in enterprise software development, and developers who have experience building Java servlets will find App Engine quite similar. In fact, Java App Engine’s APIs adhere as closely as possible to existing JSR (Java Specification Request) standards.

In addition to the servlet standard (JSR-154), App Engine supports the JDO and JPA database interfaces (or you can choose to use Objectify or the low-level interface directly). If you’re not comfortable with NoSQL databases yet, you can use Google Cloud SQL, the MySQL-compatible relational cloud database. The App Engine URLfetch service works like the Java SE java.net.URL class, the App Engine Mail API works just like the javax.mail (JSR-919) API, the App Engine Memcache API is nearly identical to javax.cache (JSR-107), etc. You can even use JSP for your web templates.

On the Python side, while Google ships a lightweight web framework (webapp/webapp2) for you to use, you aren’t limited to it. You can also use: Django, web2py, Tipfy, Bottle, and Pyramid, to name some of the more well-known frameworks that work with Python. Furthermore, if you have a Django app and use the third-party Django-nonrel package (along with djangoappengine), you can move pure Django apps onto App Engine or off App Engine to a traditional hosting service supporting Django with no changes to your application code outside of configuration. For users choosing Cloud SQL instead of App Engine’s traditional NoSQL datastore, you can use Django directly as there is an adapter specially written for Cloud SQL.
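
For completeness, here is a minimal datastore model using the NDB Python API mentioned earlier; the entity and its properties are illustrative, but the calls follow the documented NDB interface.

    # models.py -- a minimal NDB datastore model.
    from google.appengine.ext import ndb

    class Greeting(ndb.Model):
        author = ndb.StringProperty()
        content = ndb.TextProperty()
        created = ndb.DateTimeProperty(auto_now_add=True)

    def latest_greetings(count=10):
        # Queries are expressed against the model class itself.
        return Greeting.query().order(-Greeting.created).fetch(count)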

Next are the SDKs. For all supported runtimes, they are open source. This allows users to become familiar with the App Engine client-side libraries and possibly build their own APIs. In addition, if users desire control over the backend, they can use the SDK and the client APIs to build corresponding backend services. Not comfortable letting Google host and run your app(s)? This gives you an alternative. In fact, there are already two well-known App Engine backend projects: AppScale and TyphoonAE. Both claim to be 100% API-compatible, meaning that any App Engine app that Google can run, they should be able to run as well.

Next, you have control over your data. When using the traditional datastore, Google provides a datastore bulkloader. This tool lets you easily upload or download all of your data. You can find out more about the bulkloader in the official docs. Other features in App Engine related to your data include backup/restore, copying, or deleting your data. Find out more about those also in the official docs. Similarly, if using Google Cloud SQL, you can easily import or export your data using Google Cloud Storage as an intermediary. You can read more about that at the Cloud SQL docs on import/export.

Finally, with all the advantages of a PaaS system like Google App Engine, some developers may wonder about “vendor lock-in,” a situation in which it may be difficult or impossible for companies to move their apps and/or data to similar or alternative systems. While Google would love for you to stay an App Engine customer for a long time, Google recognizes that having choices makes for a healthy and competitive ecosystem.

If Google is a cloud vendor and App Engine is its product, does vendor lock-in apply? Well, yes and no. Think of it this way: you use Google’s infrastructure to avoid having to build it yourself. Arguably this is one of the most difficult and time-consuming things to do. So you can’t get something for nothing. The price you pay is that you need to integrate your app so that it connects to all of App Engine’s components.

However, while Google recommends that you code against Google App Engine’s APIs, there are workarounds. Also, think back to why you want to use App Engine… for its robustness and scalability. Google created its APIs so you could take advantage of Google’s infrastructure, not as a deliberate attempt to force you into using Google’s APIs.

By allowing alternative ways of accessing your data, using familiar development environments, following open standards, and distributing open source SDK libraries, Google fights vendor lock-in on your behalf. It may not be easy to migrate, but Google has implemented features to make it easier to migrate your app or upload/download all of your data. Google tries hard to ensure that you can move your apps or data onto or off of App Engine. But it doesn’t stop there… the team is continually innovating, listening to user input, and improving and simplifying App Engine services to further provide a development environment of choice for building your web (and non-web) apps. Finally, the best practices you’ll learn in creating great App Engine apps can be used for application development in general, regardless of the execution platform.

Other Important Features

The final two remarks here pertain mostly to enterprises. Google App Engine is compliant with global security standards: Google is certified as SAS 70 compliant, as well as compliant with its successors, SSAE 16 and ISAE 3402.

Enterprises who are Google Apps customers will find App Engine to be an extremely useful tool. When you purchase Google Apps, Google provides a default set of applications that help you run your business. In addition to the apps from Google, there are also many more built by third-party software firms that you may find compelling in the Google Apps Marketplace.

If none of the applications above meet your needs, you can use Google App Engine to build your own custom applications, roll them into your Google Apps domain, and manage them from the same control panel as if you had bought them from Google directly or from vendors in the Apps Marketplace.

Appistry

Appistry solutions leverage cutting-edge cloud-based architectures

Cloud-based architectures, with their inherent distribution of storage and computation, provide an ideal foundation for large-scale analytics, where many gigabytes or terabytes of data must be stored and processed. By designing solutions that leverage the inherent scalability, capacity, performance, simplicity and cost-efficiency of cutting-edge cloud technology, we help companies access computational power unlike any currently available.

Our cloud-based architectures unify large quantities of affordable, commodity systems with directly-attached storage, all working in concert to provide you with a single system view that transcends the performance and capacity of any single machine within the system.

The result is a system that combines three core technologies into a single unified platform. First, it is a High Performance Computing system. Second, it is a Cloud Computing platform. And third, it is a complex analytics platform. Appistry combines all three, with unified storage and computation, in a system of unlimited flexibility and power at a truly affordable price.

Appistry’s cloud-based analytics solutions give you:

  • Performance — Distributed processing and data agility can easily deliver 10-100x performance gains over “big iron” deployments
  • Scalability — An administrator is able to add additional computers, including their storage capacity, to a running system without a loss of availability of files or administrative functionality. Because tracking and membership are fully distributed and dynamic, the overall system can grow to tens of thousands of systems, or more.
  • Capacity — The analytics system provides a global view or namespace, aggregating the compute and storage capacity of all attached servers.
  • Reliability — By allowing the user to specify how many copies of each file to maintain in the system, the system is able to offer high levels of reliability at low cost.
  • Geographic Distribution — A single instance of a cloud-based analytics system can be deployed across multiple data centers. The cloud is aware of the network topology and will mirror and distribute files across the network so that the loss of any one data center does not limit access to data.
  • Disaster Recovery — The system is fully distributed; there is no central point of failure. Data ingest and analysis can continue operation even when entire data centers have been removed from the system.
  • Availability — Every computer in the analytics system is capable of performing analytics computations, managing data and responding to administrative requests. As a result, the system is impervious to the loss of individual machines or even entire racks.
  • Management Simplicity — Administrators are able to update computer configurations, system configurations, or any of the running analytics applications without taking the files offline.
  • Hardware Flexibility — Not all machines in the system need to be constructed from similar hardware. The system can recognize the attributes of each attached computer and utilize their resources accordingly.

High-resolution satellites, multimodal sensors and other input sources are driving an explosion in data available to the Intelligence community. This presents a data processing challenge.
Ayrris™ / DEFENSE overcomes these challenges by providing high-volume data ingest, storage, analysis and delivery. By leveraging Appistry’s revolutionary Computational Storage™ technology, Ayrris turns a cluster of commodity servers into a high-performance, petabyte-scale distributed storage system with no single points of failure or I/O bottlenecks. Rather than move the data to the application, we prefer to move the application to the data. Because of its unique computing platform, Ayrris / DEFENSE offers a new level of scalability, elasticity and reliability for data-intensive applications, and is fully compatible with existing agency data sources and analysis tools. Ayrris / DEFENSE allows enterprises to quickly turn raw data into usable, mission-critical intelligence better, faster and cheaper than ever before.

Storage Trends
The following three technology trends are having a dramatic impact on the way big data challenges will be addressed:

  • Transitioning Storage Systems to Cloud Technologies
  • Commoditization of Storage Equipment
  • Move Towards Data Localization

Industry progress in these areas provides solutions for the construction of large data storage systems.

In contrast to traditional monolithic storage systems, cloud computing architectures are characterized by their use of large quantities of affordable, commodity systems with directly-attached storage, all working in concert to provide the user with a single system view that transcends the performance and capacity of any single machine within the system. A storage system built in this manner provides the following attributes:

Scalability. A cloud storage administrator is able to add additional computers, including their storage capacity, to a running system without a loss of availability of files or administrative functionality. Because tracking and membership are fully distributed and dynamic, the overall system can grow to tens of thousands of systems, or more.
Capacity. The cloud storage system provides a global view or namespace, aggregating the capacity of all attached storage devices.
Reliability. Cloud storage allows the user to specify how many copies of each file to maintain in the system. The cloud is aware of the loss of any machines in the system. When these errors occur, the cloud can alert the proper administrators and take appropriate action to recover the requested reliability level.
Geographic Distribution. A single instance of a cloud storage system can be deployed across multiple data centers. The cloud is aware of the network topology and will mirror and distribute files across the network so that the loss of any one data center does not limit access to data.
Disaster Recovery. The storage system is fully distributed; there is no central point of failure. Cloud storage can continue operation even when entire data centers have been removed from the system. Cloud storage also manages the merging of multiple data centers after a logical or physical separation occurs. Out-of-date files are located and reconciled without user intervention whenever possible.
Availability. Every computer in the cloud system is capable of serving access to files or administrative requests. Cloud storage is easily able to service a large number of client requests by distributing the work across many machines. The system is impervious to the loss of individual machines or even entire racks.
Manageability. Administrators are able to update computer configurations, system configurations, or the cloud system itself without taking the files offline.
Heterogeneity. Not all machines in the cloud system need to be constructed from similar hardware. The system needs to recognize the attributes of each attached computer and utilize their resources accordingly.
By taking a cloud-oriented approach to storage and compute, we are able to deliver a more powerful system. Moreover, because cloud storage systems are built with commodity components, they are much less expensive than traditional approaches.

Historically, system architects and administrators have depended on increasingly larger and larger machines and devices to satisfy their growing computational and storage needs. These high-end, proprietary systems have come at a steep price in terms of both capital and operational costs, as well as in terms of agility and vendor lock-in. The advent of storage and computational systems based on cloud architectures results in an advantageous economic position for purchasers and users of these solutions.


Move Towards Data Localization
In traditional system architectures, computational elements (i.e. application servers) and storage devices are partitioned into separate islands, or tiers. Applications pull data from storage devices via the local or storage area network, operate on it, and then push the results back to the storage devices. As a result, for most traditionally architected applications, the weak link in the system is the bottleneck between the application and its data.
Data localization is the unification of storage devices with computational elements for the purposes of reducing computational latency and overcoming network bottlenecks.
In a system exhibiting data locality, the work is moved to the data instead of the data being moved to the work. CloudIQ Storage was built from the ground up to enable data localization, which Appistry calls computational storage™. Other examples of data localization in practice include the Apache Hadoop project (an implementation of the MapReduce model initially popularized by Google), Netezza’s data warehouse appliances, and various data caching technologies.
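
The sketch below illustrates the data-localization idea in miniature, in the MapReduce style: each node summarizes only the blocks it already holds, so just the small per-node summaries cross the network. This is a teaching sketch of the general technique, not Appistry’s or Hadoop’s actual API.

    # locality_sketch.py -- move the work to the data (word count example).
    from collections import Counter

    def map_local(block):
        # Runs where the data lives; returns a small summary of one block.
        return Counter(block.split())

    def reduce_summaries(summaries):
        # Only the summaries travel over the network, never the raw blocks.
        total = Counter()
        for summary in summaries:
            total.update(summary)
        return total

    # Each string stands for a data block stored on a different node.
    blocks_on_nodes = ['big data big', 'data moves to work', 'work moves to data']
    print reduce_summaries(map_local(b) for b in blocks_on_nodes)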
One way to compare the relative performance of traditional and cloud storage approaches and to quantify the performance benefits of computational storage is to look at the aggregate bandwidth available between application processing and storage.

Taken together, the impact of cloud computing architectures, the commoditization of storage and compute, and the move towards data localization are revolutionizing the delivery of data-intensive applications; solving problems once thought to be unsolvable because of economic or temporal concerns has now become possible.
Appistry CloudIQ Storage: Cloud Technology Applied to Big Data
Appistry CloudIQ Storage applies cloud computing architectural principles to create a scalable, reliable and highly cost-effective file storage system with no single points of failure, using only commodity servers and networking.
A CloudIQ Storage system is composed of multiple computers at one or more data centers.

CloudIQ Storage coordinates the activity of each of the computers and aggregates their attached storage to expose a single, logical data store to the user. The system is fully decentralized: each computer is a complete CloudIQ Storage system unto itself, but is aware of other members of the same storage system and shares storage responsibilities accordingly.
Several of the major functional characteristics of the CloudIQ Storage system are described below. Appistry considers these to be essential architectural characteristics of any cloud-based storage system.
Self-Organizing and Scalable
Appistry believes that fully distributed systems with self-healing and self-organizing properties are the path to solving big data challenges. The foundation of the CloudIQ Storage architecture is a lightweight yet robust membership protocol. This protocol updates member machines on the addition, removal, or unexpected loss of computers dedicated to the storage system. This shared membership data contains enough information about the members of the cloud that each individual machine is capable of assessing its location and responsibilities. These responsibilities include establishing proper communication connections and responding to system events that require healing actions. Even though there is no central control structure, the system is capable of self-organizing thousands of machines.
An administrator can easily add or remove machines or update configurations. The members of the cloud, acting independently, will share information quickly and reconfigure appropriately. The storage cloud can elastically scale up to handle multiple petabytes without heavy management overhead.

Geographically Aware
One desired feature of a robust storage system is location awareness. Computers within the CloudIQ Storage environment can use location awareness to make informed decisions about reliability configurations and to optimize the handling of user requests.
CloudIQ Storage introduces the notion of a territory: a logical collection of machines classified as a group for the purpose of data distribution. Users typically assign territories in one of several ways:

Computer Rack or Network Switch. This configuration allows an administrator to instruct a storage cloud to distribute files and functionality across systems within a single data center.
Data Center. This configuration allows an administrator to inform the cloud to distribute files and functionality between data centers.
User-Based. For storage clouds that span multiple geographies, it is beneficial to inform the system which computers are dedicated to individual user groups. Often this is similar to the data center configuration.
Hardware-Based. This configuration allows different configurations of hardware to be grouped together. These groups give the administrator a method to store data on specialized hardware for different needs. For example, within a data center one might have two territories of low-latency computers set up across racks for availability. A third collection of machines might be constructed of higher-storage-density, higher-latency hardware to keep costs low while maintaining a third copy of the data.
Territory settings can be configured on a machine-by-machine basis. Administrators can choose from any of these use cases or develop hybrid configurations that meet their needs.
CloudIQ Storage uses territory settings to implement the behaviors described in the remainder of this section.
Reliable
CloudIQ Storage provides high levels of reliability by distributing files and their associated metadata throughout the storage system. Each copy of a file possesses audit and configuration information needed to guarantee reliability requirements are achieved. The metadata of each file contains information on:
Reliability Needs. How many copies of a file need to be maintained?
Territory Distribution. Which territories can/should be used to keep a copy of the files?
Update History. What is the version history of each file?
The reliability requirements of each file in the system are distributed across the machines in the CloudIQ Storage system. Each machine watches over a subset of files in the system. When these monitors detect system changes, the following actions occur to guarantee the reliability needs of the system:
File Reliability Checks. Each monitor examines the files for which it is responsible. If a machine holding a copy of the file has been lost, additional copies of the file are created.
File Integrity Checks. If a dormant or lost machine attempts to introduce an old copy of an existing file, the system reconciles the version against the metadata of the newer files and acts to reconcile the difference.
System Monitoring Reconfiguration. As machines are introduced or lost, the responsibilities for watching files are adjusted for the new system configuration.

File Placement Reconfiguration. As new machines become available, the monitors may decide to redistribute files. The reconfiguration distributes the storage needs and service requests more equally across machines in the storage cloud. Files may also need to be repositioned to meet territory placement requirements.
As the storage cloud grows and changes with new hardware, new network connections, and configuration changes, the cloud storage system will constantly maintain the proper file distribution and placement.

Available

CloudIQ Storage provides extraordinary levels of availability due to the completely decentralized nature of the architecture. Every computer within the system is capable of serving file ingestion or file retrieval requests. Therefore, the total bandwidth in and out of the system is the aggregate of that of all of the machines participating in the cloud.
In addition, even though multiple copies of the file are present in the cloud storage system, the user gets only a single, logical view of the system. The CloudIQ Storage architecture resolves the multiple territories, copies and even versions of each file to deliver the requested file to the user.
When a file retrieval request arrives at a computer in a cloud storage system, several actions occur to provide the user with their data quickly:
File Location. The computer servicing a file request locates which machines in the cloud hold the requested file using consistent hashing and a distributed hash table (see the sketch after this list). No single machine holds the entire file directory, as it would become a performance bottleneck and a single point of failure. Lookups are a constant-time operation that returns the machines within the system holding a copy of the file.
Machine Selection. Once the target machines holding the file have been identified, the requesting machine can choose which machine is optimal for retrieving the file. This choice can be made based on factors such as network proximity and machine utilization.
File Retrieval. Once the machine is selected, the file can be retrieved by the client.
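
The sketch below illustrates the general consistent-hashing technique named in the file-location step. CloudIQ Storage’s actual implementation is proprietary and will differ in detail; production hash rings, for example, usually add virtual nodes for smoother balance.

    # hash_ring_sketch.py -- consistent hashing for constant-time file location.
    import bisect
    import hashlib

    def _key(value):
        # Map any string onto the hash ring.
        return int(hashlib.md5(value).hexdigest(), 16)

    class HashRing(object):
        def __init__(self, machines, copies=3):
            self.copies = copies                       # replicas per file
            self.ring = sorted((_key(m), m) for m in machines)
            self.keys = [k for k, _ in self.ring]

        def locate(self, filename):
            # Walk clockwise from the file's hash; the next `copies` distinct
            # machines on the ring hold the replicas. No machine stores a
            # global directory, so there is no central bottleneck.
            i = bisect.bisect(self.keys, _key(filename)) % len(self.ring)
            found = []
            while len(found) < min(self.copies, len(self.ring)):
                machine = self.ring[i][1]
                if machine not in found:
                    found.append(machine)
                i = (i + 1) % len(self.ring)
            return found

    ring = HashRing(['node-a', 'node-b', 'node-c', 'node-d'])
    print ring.locate('satellite-image-0042.tif')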
In addition to optimized read operations, the cloud storage solution provides “always writable” semantics using a concept called eventual consistency. In an eventually consistent system, write operations always succeed as long as the system can access the number of nodes required by policy for a successful write (one, by default). During the write operation, audit information is stored with the file’s metadata so that any additional copies or version reconciliation can be performed later. Eventually consistent systems are not right for every storage application, but the model is ideal for “write once, read many” style systems.

The availability, reliability, and location awareness features of a cloud storage solution bring the highest level of disaster recovery available to a storage administrator.
The system can lose a machine, a rack, or even an entire data center, and it remains capable of performing all necessary operations for the user.
Manageable
Management and ease-of-use features are essential for the creation of a robust cloud storage system. When dealing with hundreds or thousands of machines, management operations must be simplified. CloudIQ Storage ensures this by providing the following attributes:
Always Available Operation. Any configuration changes made to the system must not remove the availability of files. In the event that multiple machines need to be taken offline for updates, the system must have a strategy for keeping files available. This may be achieved using territories: if two territories hold copies of the same files, machines in one territory can temporarily be taken offline for updates while the second territory serves the files. Any file updates performed during the downtime will be automatically reconciled using the monitoring and reliability features of the cloud.
Configurable Reliability Settings. Administrators can declare how many copies of a file should be stored in the storage cloud. A cloud-wide setting is established, which may be overridden on a file-by-file basis.
Real-Time Computer Injection. When the storage system needs more capacity, the administrator needs to be able to add machines without affecting the availability of any file.
Real-Time Computer Decommissioning. When it is decided that a computer is no longer required, the administrator needs operations to gracefully remove the computer from processing requests and move its files to other machines within the cloud.
Auditing. Important operations, events, and system messages need to be saved.
System-Wide Configuration Changes. All configuration changes need to propagate across all machines with a single operation.
Because management tasks in the storage cloud are virtualized and automated, a small number of system administrators can easily maintain a large number of computers storing petabytes of data.

Secure
CloudIQ Storage implements a flexible security model designed to allow multitenant operation while ensuring the security of user data. Just as the physical storage cloud may be partitioned into territories, the logical storage cloud may be partitioned into “spaces,” with each space representing a hierarchical collection of files under common control. Access to each space, and to each file within a space, is governed by an access control list (ACL) that assigns access rights for users, groups, and the file’s owner.
To facilitate secure operation of the cloud, administration rights to the system are divided into a series of distinct privileges that may be assigned to individual users or groups.
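
As a conceptual sketch of that model (not CloudIQ Storage’s actual API), an access control list maps users and groups to rights, with the file’s owner retaining full control; the names and rights below are illustrative.

    # acl_sketch.py -- a toy ACL in the spirit of per-space/per-file control.
    READ, WRITE, ADMIN = 'read', 'write', 'admin'

    class ACL(object):
        def __init__(self, owner, entries=None):
            self.owner = owner
            # Maps a user or group name to the rights it holds.
            self.entries = entries or {}

        def allows(self, user, groups, right):
            if user == self.owner:
                return True                            # owners keep full control
            held = set(self.entries.get(user, ()))
            for group in groups:
                held |= set(self.entries.get(group, ()))
            return right in held

    acl = ACL(owner='alice', entries={'analysts': (READ,), 'bob': (READ, WRITE)})
    print acl.allows('bob', [], WRITE)                 # True
    print acl.allows('carol', ['analysts'], WRITE)     # False: analysts read only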

Traditional storage and computational offerings fail to meet the needs of today’s big data environments. These approaches have been characterized by isolated pools of expensive storage and the constant movement of data from where it lives to the application servers and users who need it. Attempting to meet demanding capacity and performance requirements in this manner often necessitates the purchase of extremely costly, special-purpose hardware. Yet local and wide-area network bottlenecks remain a challenge.
Petabyte-scale environments dictate the need for distributed, high-performing solutions that bring the work to the data, not the other way around. In this paper we have demonstrated how the cloud computing architectural approach provides the key to meeting the challenges posed by big data.
Appistry CloudIQ Storage is software that applies these principles to deliver robust private cloud storage environments. Storage clouds based on CloudIQ Storage exhibit the essential characteristics Appistry proposes for any cloud storage system: individual resources self-organize without centralized control, yielding extreme scalability; the system spans data centers and optimizes behavior based on topology; high levels of reliability and availability are transparently ensured; and system management is policy-based, automated, and virtualized.

With the Hadoop Edition, Appistry hopes to “upgrade” the performance and availability of Hadoop-based applications by replacing the Hadoop Distributed File System (HDFS) with CloudIQ Storage. While Hadoop is wildly popular right now, one issue is its use of a “namenode” – a centralized metadata repository that can constrain performance and create a single point of failure. Appistry’s approach retains Hadoop’s MapReduce engine to assign parallel-processing tasks, but attempts to resolve the namenode problems with CloudIQ Storage’s wholly distributed architecture.

Appistry is working with an intelligence-sector customer that has “massive, massive” applications built on HBase, a distributed NoSQL database with Hadoop at its core. Although CloudIQ Storage doesn’t formally support HBase, it has helped the customer improve database throughput, and formal support might be on the way. Because of their inherently scalable natures, CloudIQ Storage and NoSQL databases are complementary solutions for handling structured and unstructured data.

The idea behind cloud storage is the same as the idea behind cloud computing: organizations want to meet their ever-expanding storage needs as they arise, and they want to do so at lower price points than are available from incumbent vendors like EMC and NetApp. For customers in areas like social media, scientific imaging or film rendering, though, scale and price must be matched with performance. This is where companies like Appistry come in, but Appistry certainly isn’t alone in the quest for these dollars. Startups Scale Computing, Pivot3, MaxiScale and ParaScale all have raised millions for their unique offerings, and HP last summer snatched up IBRIX to boost its relevance in the performance-hungry film-rendering market.

Apple’s iCloud

Apple’s iCloud is the most-used cloud service in the US, beating Dropbox & Amazon

With support built into every Mac and iOS device, Apple’s iCloud is the most-used cloud media service by U.S. consumers, a new survey has found.


Strategy Analytics graphic via Engadget.

iCloud accounts for 27 percent of cloud customers in America, according to new data published Thursday by Strategy Analytics. That places Apple’s service ahead of second-place Dropbox, with 17 percent, and third-place Amazon, with 15 percent.

Apple’s rival Google comes in fourth with its Google Drive service, used by 10 percent of U.S. consumers. And in fifth is the cloud movie service Ultraviolet, used by just 4 percent of respondents.

The survey of 2,300 people found that cloud storage is particularly popular among people ages 20 to 24, and the most common use for cloud storage is music. Of those surveyed, 90 percent of iCloud, Amazon and Google Drive users store music files in the cloud.

The story is different with Dropbox users, as 45 percent of them use the service to store music files.

“Music is currently the key battleground in the war for cloud domination,” said Ed Barton, director of Digital Media at Strategy Analytics. “Google is tempting users by giving away free storage for 20,000 songs which can be streamed to any Android device, a feature both Amazon and Apple charge annual subscriptions for.

“However, the growth of video streaming and the desire to access content via a growing range of devices will see services such as the Hollywood-backed digital movie initiative Ultraviolet – currently used by 4% of Americans – increase market share.”

In its quarterly earnings report in January, Apple revealed that it has more than 250 million active iCloud users, growing significantly from 190 million in October. Users are automatically prompted to open a free iCloud account with 5 gigabytes of storage when setting up a new iOS device.

IBM SmartCloud Computing

IBM SmartCloud Foundation is a set of technologies for building and managing virtualized infrastructures and private and hybrid clouds. Together these technologies can help build a fully functional cloud management system that aids business transformation and new service delivery. Individually, these technologies can help nearly any cloud project make quick and incremental progress towards a longer term cloud strategy.


Featured Capabilities

IT service management

Service and IT asset management and process automation across the organization.

IBM® Service Delivery and Process Automation software gives you the visibility, control and automation needed to provide quality service delivery, addressing all stages of the service lifecycle.

Tivoli® Service Delivery and Process Automation software offerings provide a complete solution, automating the full lifecycle of service requests, incidents and trouble tickets from their creation through to the environmental changes they produce. Tivoli software is integrated to capture incoming requests, incidents and problems; route them to the correct decision-makers; and expedite resolution with enterprise-strength server and desktop provisioning tools. They do this while keeping an accurate record of all the configuration items in a federated management database and a real-time picture of the deployed infrastructure – matching hardware and software services with the business needs they fulfill.

By automating change, configuration, provisioning, release and asset management tasks, IBM Service Delivery and Process Automation software and services help reduce cost and eliminate error.

Common process automation platform combines asset and service management in one environment

IBM Service Delivery and Process Automation products leverage a common process automation engine. This engine is unique in its ability to:

  • Provide a self-service portal interface for reservation of compute, storage, and networking resources.
  • Automate the provisioning and de-provisioning of resources.
  • Increase availability of resources with real-time monitoring and energy management.
  • Combine asset and service management into one environment.
  • Deliver a federated configuration management system.
  • Provide advanced business process management and integration with other Web-based tools.
  • Offer full end-to-end management views of business applications.

With the implementation of IBM service delivery and process automation software solutions, clients can expect to improve the efficiency and effectiveness of IT, enable greater convergence of technology and business processes, and see results in areas like mean time to repair, service quality, and customer satisfaction.

With Tivoli You Can…

  • Optimize efficiency and accuracy of service delivery by automating best practices for common tasks, service requests, incident reports, and change and release management
  • Lower the cost of management and compliance by discovering and tracking all deployed resources and their configurations, and matching them against established policies
  • Improve productivity by giving users direct access to a catalog of automated service requests
  • Improve customer satisfaction through higher availability of critical business applications
  • Help control energy consumption in the data center by managing workloads and the provisioning/de-provisioning of servers to meet SLAs
  • Dynamically deploy, manage, secure and retire physical and virtual servers, clients and applications according to users’ needs and organizational guidelines

Anticipated Results

  • Improved resource utilization, resulting in a 50% decrease in the need for additional equipment
  • Labor savings of 10-20% (reduced man-hours due to task automation and software distribution)
  • Increased productivity of supported services by 10-25%
  • Improved success rate for change and release deployments by 10-30%
  • 10-20% reduction in deployed application rollbacks
  • 84% reduction in time taken to inventory physical and software assets
  • IT staff cost savings of $120 per PC/device/year through use of Packaging Tools and Automated Software Distribution
  • Reduced labor cost of 10-40% to maintain multiple configuration databases

Featured products

  • IBM Service Delivery Manager
    Enables businesses to rapidly implement a complete service management solution within a private cloud computing environment. Delivered as a pre-integrated software stack and deployed as a set of virtual images, it allows automated IT service deployment and provides rapid self-service provisioning, resource monitoring, cost management, and high availability of services in a cloud.
  • IBM Tivoli System Automation Application Manager
    Designed for high availability and disaster recovery solutions, providing the ability to automatically initiate, execute, and coordinate the starting, stopping, restarting and failing over of applications running in heterogeneous and virtual IT environments.
  • Tivoli Provisioning Manager
    Provides automated provisioning, improved resource utilization and enhanced IT service delivery.
  • Tivoli Change and Configuration Management Database
    Provides an enterprise-ready platform for storing deep, standardized data on configurations and change histories to help integrate people, processes, information and technology.
  • IBM Tivoli Service Request Manager
    Enables service efficiencies, reduces disruptions, streamlines service desk operations, improves customer satisfaction, and reduces costs by unifying key service support and asset management processes.
  • IBM Tivoli Workload Scheduler
    Enables automated workload management and monitoring across the enterprise, featuring a single console, self-healing capabilities, real-time alerts and reports.
  • Tivoli System Automation
    Protects business and IT services with end-to-end high availability, advanced policy-based automation, and single point control for heterogeneous environments.

Monitoring and performance management

Management and monitoring of application, middleware, server, and network environments for dynamic IT infrastructures.

Efficiently manage the cloud

In a 2012 IBM global study, CEOs ranked technology as the #1 factor impacting their organizations, and 90 percent of those CEOs viewed the cloud as critical to their plans. But as organizational demand for cloud services increases, so do the operational costs and the business risks. Managed incorrectly, the cloud can result in revenue losses, performance degradation and more.

Monitoring and performance management solutions from IBM help you manage the cloud effectively. They are designed to lower hardware and software costs and minimize performance risks across application, middleware, server, and network environments.

Provisioning and orchestration

Deployment and orchestration of virtual and cloud environments across the service delivery lifecycle.

Accelerate cloud service delivery

Organizations are increasingly turning to the cloud to accelerate the delivery of services and simplify the management of virtualized environments. But the cloud introduces new challenges. Provisioning workloads, controlling image sprawl and managing application deployment become much more complex in virtual and cloud environments.

Cloud provisioning and orchestration solutions from IBM are designed to reduce the IT management complexities introduced by virtual and cloud environments. This accelerates cloud service delivery, allowing the enterprise to respond quickly to changing business needs, all while reducing operational costs in a heterogeneous hypervisor and hardware environment.

Scheduling and systems automation

Cloud management with the added value of choice and automation above and beyond the provisioning of virtual machines.

IBM SmartCloud Enterprise+ is a fully managed, security-rich and production-ready cloud environment designed to ensure enterprise-class performance and availability. SCE+ offers complete governance, administration and management control along with service-level agreements (SLAs) to align your specific business and usage requirements. Multiple security and isolation options built into the virtual infrastructure and network keep this cloud separate from other cloud environments.

Transform and automate the provisioning of dynamic workloads using cloud services.

Highlights

  • Virtualization built into the Power platform, not bolted on, ensures optimal resource utilization, efficiency, security and enterprise quality of service for mission-critical and compute-intensive workloads.
  • Scalability and resource elasticity improve workload availability and performance.
  • Automated management, provisioning and service delivery decrease deployment times and increase flexibility and agility for faster responsiveness to changing business demands.
  • A self-service portal and standardized service catalog provide consistent, reliable and responsive service delivery for improved customer satisfaction.
  • Insight into resource utilization provides cost transparency and empowers IT organizations to direct costs back to the business.

As the world changes and IT plays an increasingly critical role, all types of organizations, businesses, and governments are seeking to transform the way they deliver IT services and improve operational efficiency so they can quickly respond to changing business demands. Cloud computing can improve asset utilization, workload optimization and service delivery while reducing complexity and delivering superior IT economics.

Traditional IT Infrastructure presents challenges on many levels. It is typically:

  • Composed of silos that leave the infrastructure disconnected from the priorities of the business
  • Made up of static islands of computing resources that result in inefficiencies and underutilized assets
  • Struggling with rapid data growth, regulatory compliance, information integrity and security concerns, all while trying to control continuously rising IT costs
  • Inflexible in the face of rapid, unprecedented changes in markets, service demands and stakeholder expectations

As a result of the challenges traditional IT infrastructures are facing, organizations are looking towards a Smarter Computing infrastructure to meet the demand for a service delivery model that enables growth and innovation while lowering overall IT costs. A cloud computing environment built with IBM® Power Systems™ helps organizations transform their data centers to meet these challenges. Power Systems cloud solutions:

  • Deliver an integrated virtualization foundation for uncompromised efficiency, maximum utilization, and the ability to scale up or down in line with business needs
  • Address the information challenge by delivering flexible and secure access to critical information where it is needed, while meeting the highest standards for risk management and compliance mandates
  • Redistribute the IT budget through advanced virtualization, automation and datacenter analytics
  • Utilize flexible delivery models to greatly simplify IT service delivery while providing enterprise QoS capabilities, including continuous application availability, optimized performance, greater scalability and enterprise-class security

IBM Power Systems cloud solutions can help customers quickly build a powerful, dynamic, and efficient cloud computing environment enabling them to reduce IT costs, improve service delivery, and foster business innovation.

Storage management, backup and recovery

Automate backup and restore, centralize storage management and ensure efficient movement and retention of data.

Benefits of VDI (Virtual Desktop Infrastructure)


If your organization is interested in optimizing its workstation requisition and maintenance routine, it should consider adopting a virtual desktop infrastructure (VDI). VDI is the practice of hosting workstation operating systems and applications on a server. Users can access the “virtualized” operating systems and applications from thin clients, terminals, workstations, tablets, smartphones, and other devices, as long as those devices can connect to the host server. Because the operating systems and applications are virtualized, they can be accessed from devices running different operating systems, such as Android, Linux, or Microsoft Windows.

VDI benefits

Depending on the nature of your organization’s IT infrastructure and on the VDI solution it chooses to implement, your organization can take advantage of a great number of benefits. VDI solutions typically have unique features that will appeal to specific organizational needs, but most VDI solutions will, at the least, provide the following benefits:

  • Quick and easy “workstation” provisioning: Once your organization’s IT team has created a virtualized workstation with an operating system, applications, and security configurations, that workstation can serve as a template to be reproduced any time a user needs a new workstation. Copying a template to create workstations as needed saves time and lets users be productive instead of waiting for IT staff to build a computer, install software, and apply patches.
  • Centralized patch management: Patch management is always an IT nightmare. While some software, such as Microsoft Windows and antivirus products, can be configured on individual workstations to auto-update, other applications, like Java, have to be manually downloaded and installed. With a VDI solution, because all machines are hosted on one server, it is easy for your organization’s IT staff to ensure that all patches are applied in a timely manner.
  • Standardized security configurations: Because new security threats are discovered every day, your organization’s IT staff likely spends a lot of time updating security patches and maintaining standardized security settings on individual workstations. With a VDI solution, the IT staff can quickly apply security patches to all virtualized workstations and ensure that security settings are standardized across them.
  • Secured data: With a VDI solution, sensitive data is secure because all workstations are virtualized and hosted on servers or in hosted data centers. Sensitive data can be created and worked with from numerous computing devices, but it never resides on the device, since the host server houses the virtualized workstation and provides the operating system, applications, data, and processing power. In the event that one of your organization’s laptops, tablets, or smartphones is lost or stolen, your organization won’t have to worry about data exposure, because the data is not stored on the device.
  • Anywhere access to virtualized workstations: If your organization adopts a VDI solution, it will see increased productivity because users can access their virtualized workstations from home, work, or vacation on many computing devices, such as smartphones, tablets, and laptops. Users can safely use personal devices like home computers to access sensitive organizational data because virtual workstations are isolated from the personal devices’ hard drives.

Learn more

Your organization can realize many benefits from adopting a VDI solution from a reputable vendor such as VMware or Microsoft. To learn more about VDI in general, or to discover how a VDI solution can benefit your organization, please contact the VDI experts at All Covered.

Plan

Cloud readiness assessment, ROI and migration strategies should be clear as you embark on the cloud journey. IBM offers a host of services such as IBM Strategy and Change Services for Cloud Adoption (US) and IBM Strategy and Design Services for Cloud Infrastructure (US) to help clients develop a cloud roadmap.

Assess and build a security roadmap with IBM Professional Security Services – cloud security assessment (US) and cloud security strategy roadmap (US).

Build
Accelerate your application development and test efforts with the IBM SmartCloud Enterprise. Realize cost savings and faster time to value in your private cloud environment. Provide anytime, anywhere access to applications, information and resources with the IBM Smart Desktop Cloud. IBM Cloud Service Provider Platform (CSP2) (US) accelerates and simplifies deployment of a complete cloud services environment.
Deliver
Unleash employee potential with world-class social networking services and online collaboration tools, including file sharing, web conferencing and instant messaging, with the IBM LotusLive™ Collaboration Suite. IBM Information Protection Services – managed backup cloud (US) is a cloud-based service that enables security-rich, managed protection of your critical business data. Get fast and flexible SaaS and cloud-based application integration with your existing IT environment with Cast Iron Systems (US).

Amazon EC2 (Elastic Compute Cloud)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios.

AWS Free Tier includes 750 hours of Linux or Windows Micro Instances each month for one year. To stay within the Free Tier, use only EC2 Micro instances. (750 hours is enough to run one Micro instance continuously for a full month: 24 hours × 31 days = 744 hours.)



Amazon EC2 Functionality

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network’s access permissions, and run your image on as many or as few systems as you desire.

To use Amazon EC2, you simply (a minimal API sketch follows this list):

  • Select a pre-configured, templated Amazon Machine Image (AMI) to get up and running immediately, or create an AMI containing your applications, libraries, data, and associated configuration settings.
  • Configure security and network access on your Amazon EC2 instance.
  • Choose which instance type(s) you want, then start, terminate, and monitor as many instances of your AMI as needed, using the web service APIs or the variety of management tools provided.
  • Determine whether you want to run in multiple locations, utilize static IP endpoints, or attach persistent block storage to your instances.
  • Pay only for the resources that you actually consume, like instance-hours or data transfer.
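
To make the steps above concrete, here is a minimal sketch using boto3, a later-generation AWS Python SDK (an assumption relative to this page’s era); the region, AMI ID, key pair, and security group names are hypothetical placeholders:

    # Sketch: launch, inspect, and terminate an EC2 instance via the API.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one Micro instance from a pre-configured AMI.
    resp = ec2.run_instances(
        ImageId="ami-12345678",        # placeholder AMI ID
        InstanceType="t1.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",          # placeholder key pair
        SecurityGroups=["my-web-sg"],  # placeholder security group
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Monitor: wait until it is running, then read its public IP.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    print(desc["Reservations"][0]["Instances"][0].get("PublicIpAddress"))

    # You pay only while it runs; terminate when finished.
    ec2.terminate_instances(InstanceIds=[instance_id])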

Service Highlights

Elastic – Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.

Completely Controlled – You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine. You can stop your instance while retaining the data on your boot partition and then subsequently restart the same instance using web service APIs. Instances can be rebooted remotely using web service APIs. You also have access to console output of your instances.
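
As a hedged illustration of this control, the same stop, restart, and reboot operations can be issued through the API; a minimal boto3 sketch (the instance ID is a placeholder):

    # Sketch: instance lifecycle control through the web service APIs.
    import boto3

    ec2 = boto3.client("ec2")
    iid = "i-0123456789abcdef0"  # placeholder instance ID

    # Stop the instance while retaining the data on its boot partition...
    ec2.stop_instances(InstanceIds=[iid])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[iid])

    # ...and subsequently restart the same instance.
    ec2.start_instances(InstanceIds=[iid])

    # Instances can also be rebooted remotely, and the console output
    # is available for diagnostics.
    ec2.reboot_instances(InstanceIds=[iid])
    print(ec2.get_console_output(InstanceId=iid).get("Output", ""))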

Flexible – You have the choice of multiple instance types, operating systems, and software packages. Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application. For example, your choice of operating systems includes numerous Linux distributions, and Microsoft Windows Server.

Designed for use with other Amazon Web Services – Amazon EC2 works in conjunction with Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon SimpleDB and Amazon Simple Queue Service (Amazon SQS) to provide a complete solution for computing, query processing and storage across a wide range of applications.

Reliable – Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned. The service runs within Amazon’s proven network infrastructure and datacenters. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region.

Secure – Amazon EC2 provides numerous mechanisms for securing your compute resources:

  • Amazon EC2 includes web service interfaces to configure firewall settings that control network access to and between groups of instances.
  • When launching Amazon EC2 resources within Amazon Virtual Private Cloud (Amazon VPC), you can isolate your compute instances by specifying the IP range you wish to use, and connect to your existing IT infrastructure using an industry-standard encrypted IPsec VPN. You can also choose to launch Dedicated Instances into your VPC. Dedicated Instances are Amazon EC2 instances that run on hardware dedicated to a single customer for additional isolation.
  • For more information on Amazon EC2 security, refer to the Amazon Web Services: Overview of Security Processes document.

Inexpensive – Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay a very low rate for the compute capacity you actually consume. See Amazon EC2 Instance Purchasing Options for a more detailed description.

  • On-Demand Instances – On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments. This frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand Instances also remove the need to buy “safety net” capacity to handle periodic traffic spikes.
  • Reserved Instances – Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance. There are three Reserved Instance types (Light, Medium, and Heavy Utilization Reserved Instances) that enable you to balance the amount you pay upfront with your effective hourly price. The Reserved Instance Marketplace is also available, giving you the opportunity to sell Reserved Instances if your needs change (e.g., you want to move instances to a new AWS Region, change to a new instance type, or sell capacity for projects that end before your Reserved Instance term expires).
  • Spot Instances – Spot Instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price. The Spot Price changes periodically based on supply and demand, and customers whose bids meet or exceed it gain access to the available Spot Instances. If you have flexibility in when your applications can run, Spot Instances can significantly lower your Amazon EC2 costs. (A short bidding sketch follows this list.)
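
The Spot mechanism described above maps to a single API call; a minimal boto3 sketch (the bid price, AMI ID, and instance type are placeholders):

    # Sketch: bid on unused EC2 capacity with a Spot Instance request.
    import boto3

    ec2 = boto3.client("ec2")

    # The instance runs only while the Spot Price stays at or below the bid.
    ec2.request_spot_instances(
        SpotPrice="0.05",                  # placeholder bid, USD per hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-12345678",     # placeholder AMI ID
            "InstanceType": "m1.small",
        },
    )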

Easy to Start – Quickly get started with Amazon EC2 by visiting AWS Marketplace to choose preconfigured software on Amazon Machine Images (AMIs). You can quickly deploy this software to EC2 via 1-Click launch or with the EC2 console.

Features

Amazon EC2 provides a number of powerful features for building scalable, failure-resilient, enterprise-class applications, including:

Amazon Elastic Block Store – Amazon Elastic Block Store (EBS) offers persistent storage for Amazon EC2 instances. Amazon EBS volumes are network-attached, and persist independently from the life of an instance. Amazon EBS volumes are highly available, highly reliable volumes that can be leveraged as an Amazon EC2 instance’s boot partition or attached to a running Amazon EC2 instance as a standard block device. When used as a boot partition, Amazon EC2 instances can be stopped and subsequently restarted, enabling you to only pay for the storage resources used while maintaining your instance’s state. Amazon EBS volumes offer greatly improved durability over local Amazon EC2 instance stores, as Amazon EBS volumes are automatically replicated on the backend (in a single Availability Zone). For those wanting even more durability, Amazon EBS provides the ability to create point-in-time consistent snapshots of your volumes that are then stored in Amazon S3, and automatically replicated across multiple Availability Zones. These snapshots can be used as the starting point for new Amazon EBS volumes, and can protect your data for long term durability. You can also easily share these snapshots with co-workers and other AWS developers. Amazon EBS provides two volume types: Standard volumes and Provisioned IOPS volumes. Standard volumes offer cost effective storage that is ideal for applications with moderate or bursty I/O requirements. Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive applications such as databases. See Amazon Elastic Block Store for more details.
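
A minimal sketch of the EBS workflow just described, using boto3 (the Availability Zone and instance ID are placeholders; a Provisioned IOPS volume would instead pass VolumeType="io1" with an Iops value):

    # Sketch: create, attach, and snapshot an EBS volume.
    import boto3

    ec2 = boto3.client("ec2")

    # A 100 GB standard volume in one Availability Zone.
    vol = ec2.create_volume(Size=100,
                            AvailabilityZone="us-east-1a",  # placeholder
                            VolumeType="standard")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach it to a running instance as a standard block device.
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",  # placeholder
                      Device="/dev/sdf")

    # Point-in-time snapshot, stored in Amazon S3.
    snap = ec2.create_snapshot(VolumeId=vol["VolumeId"],
                               Description="example backup")
    print(snap["SnapshotId"])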

EBS-Optimized Instances – For a low, additional, hourly fee, customers can launch selected Amazon EC2 instance types as “EBS-Optimized” instances. EBS-Optimized instances enable Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume. EBS-Optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used. When attached to EBS-Optimized instances, Provisioned IOPS volumes are designed to deliver within 10% of their provisioned performance 99.9% of the time. See Amazon EC2 Instance Types to find out more about instance types that can be launched as EBS-Optimized instances.

Multiple Locations – Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location. Regions consist of one or more Availability Zones, are geographically dispersed, and will be in separate geographic areas or countries. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region. Amazon EC2 is currently available in nine regions: US East (Northern Virginia), US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), South America (Sao Paulo), and AWS GovCloud.

Elastic IP Addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by quickly remapping your Elastic IP address to a replacement instance. In addition, you can optionally configure the reverse DNS record of any of your Elastic IP addresses by filling out this form.
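
The remapping trick described above is two API calls; a minimal boto3 sketch (the instance IDs are placeholders):

    # Sketch: remap a static Elastic IP from a failed instance to a spare.
    import boto3

    ec2 = boto3.client("ec2")

    # The address belongs to the account, not to any one instance.
    addr = ec2.allocate_address(Domain="standard")

    # Point it at the primary instance...
    ec2.associate_address(InstanceId="i-0aaaaaaaaaaaaaaaa",  # placeholder
                          PublicIp=addr["PublicIp"])

    # ...and, if that instance fails, remap the same public IP to a
    # replacement instead of waiting for DNS changes to propagate.
    ec2.associate_address(InstanceId="i-0bbbbbbbbbbbbbbbb",  # placeholder
                          PublicIp=addr["PublicIp"])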

Amazon Virtual Private Cloud – Amazon VPC is a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud. Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources. See Amazon Virtual Private Cloud for more details.
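
A minimal sketch of carving out such an isolated environment with boto3 (the address ranges are placeholders; the VPN gateway and IPsec tunnel back to the existing infrastructure are configured separately):

    # Sketch: create an isolated VPC and a subnet for compute instances.
    import boto3

    ec2 = boto3.client("ec2")

    # Specify the private IP range the isolated resources will use.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")       # placeholder range
    vpc_id = vpc["Vpc"]["VpcId"]

    # A subnet inside the VPC to hold instances.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")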

Amazon CloudWatch – Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources and applications, starting with Amazon EC2. It provides you with visibility into resource utilization, operational performance, and overall demand patterns—including metrics such as CPU utilization, disk reads and writes, and network traffic. You can get statistics, view graphs, and set alarms for your metric data. To use Amazon CloudWatch, simply select the Amazon EC2 instances that you’d like to monitor. You can also supply your own business or application metric data. Amazon CloudWatch will begin aggregating and storing monitoring data that can be accessed using web service APIs or Command Line Tools. See Amazon CloudWatch for more details.
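
A minimal sketch of pulling those metrics through the API with boto3 (the instance ID is a placeholder):

    # Sketch: read an instance's recent CPU utilization from CloudWatch.
    from datetime import datetime, timedelta
    import boto3

    cw = boto3.client("cloudwatch")

    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId",
                     "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,                  # five-minute aggregation buckets
        Statistics=["Average"],
    )
    for p in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(p["Timestamp"], p["Average"])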

Auto Scaling – Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees. See Auto Scaling for more details.
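
A minimal sketch of the scale-up/scale-down setup described above, using boto3 (names, AMI ID, and zones are placeholders):

    # Sketch: an Auto Scaling group that grows and shrinks with demand.
    import boto3

    asg = boto3.client("autoscaling")

    # A launch configuration is the template for new instances.
    asg.create_launch_configuration(
        LaunchConfigurationName="web-lc",     # placeholder name
        ImageId="ami-12345678",               # placeholder AMI ID
        InstanceType="m1.small",
    )

    # Keep between 2 and 10 instances across two Availability Zones.
    asg.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # A policy that a CloudWatch alarm can trigger to add two instances.
    asg.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
    )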

Elastic Load Balancing – Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. You can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance. Amazon CloudWatch can be used to capture a specific Elastic Load Balancer’s operational metrics, such as request count and request latency, at no additional cost beyond Elastic Load Balancing fees. See Elastic Load Balancing for more details.
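
A minimal sketch of fronting instances with a classic Elastic Load Balancer via boto3 (names, zones, and the instance ID are placeholders):

    # Sketch: balance HTTP traffic across instances and health-check them.
    import boto3

    elb = boto3.client("elb")

    elb.create_load_balancer(
        LoadBalancerName="web-elb",           # placeholder name
        Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                    "InstanceProtocol": "HTTP", "InstancePort": 80}],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Health checks let the balancer route around unhealthy instances.
    elb.configure_health_check(
        LoadBalancerName="web-elb",
        HealthCheck={"Target": "HTTP:80/", "Interval": 30, "Timeout": 5,
                     "UnhealthyThreshold": 2, "HealthyThreshold": 2},
    )

    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",
        Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder
    )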

High Performance Computing (HPC) Clusters – Customers with complex computational workloads such as tightly coupled parallel processes, or with applications sensitive to network performance, can achieve the same high compute and network performance provided by custom-built infrastructure while benefiting from the elasticity, flexibility and cost advantages of Amazon EC2. Cluster Compute, Cluster GPU, and High Memory Cluster instances have been specifically engineered to provide high-performance network capability and can be programmatically launched into clusters – allowing applications to get the low-latency network performance required for tightly coupled, node-to-node communication. Cluster instances also provide significantly increased throughput making them well suited for customer applications that need to perform network-intensive operations. Learn more about how Amazon EC2 and other AWS services can be used for HPC Applications.

High I/O Instances – Customers requiring very high, low latency, random I/O access to their data can benefit from High I/O instances. High I/O instances are an Amazon EC2 instance type that can provide customers with random I/O rates over 100,000 IOPS. High I/O instances are backed by Solid State Disk (SSD) technology and are ideally suited for customers running very high performance NoSQL and relational databases. See Amazon EC2 Instance Types to find out more about High I/O instances.

High Storage Instances – Customers requiring very high storage density per instance and high sequential I/O for data-intensive applications like data warehousing and Hadoop can benefit from High Storage instances. High Storage instances are an Amazon EC2 instance type that can provide sequential I/O throughput of 2.4 GB/s and 48 TB of instance storage across 24 hard disk drives. See Amazon EC2 Instance Types to find out more about High Storage instances.

VM Import/Export – VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back at any time. By importing virtual machines as ready to use EC2 instances, you can leverage your existing investments in virtual machines that meet your IT security, configuration management, and compliance requirements. You can export your previously imported EC2 instances back to your on-premise environment at any time. This offering is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. Learn more about VM Import/Export.

AWS Marketplace – AWS Marketplace is an online store that helps you find, buy and quickly deploy software that runs on AWS. You can use AWS Marketplace’s 1-Click deployment to quickly launch pre-configured software and be charged for what you use, by the hour or month. AWS handles billing and payments, and software charges appear on your AWS bill. Learn more about AWS Marketplace.

Instance Types
Standard Instances
First Generation

First generation (M1) Standard instances provide customers with a balanced set of resources and a low cost platform that is well suited for a wide variety of applications.

  • M1 Small Instance (default): 1.7 GiB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
  • M1 Medium Instance: 3.75 GiB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units), 410 GB of local instance storage, 32-bit or 64-bit platform
  • M1 Large Instance: 7.5 GiB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform
  • M1 Extra Large Instance: 15 GiB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

Second Generation

Second generation (M3) Standard instances provide customers with a balanced set of resources and a higher level of processing performance compared to First Generation Standard instances. Instances in this family are ideal for applications that require higher absolute CPU and memory performance. Examples of applications that will benefit from the performance of Second Generation Standard instances include encoding, high traffic content management systems, and memcached.

  • M3 Extra Large Instance: 15 GiB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), EBS storage only, 64-bit platform
  • M3 Double Extra Large Instance: 30 GiB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), EBS storage only, 64-bit platform

Micro Instances

Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and web sites that require additional compute cycles periodically. You can learn more about how you can use Micro instances and appropriate applications in the Amazon EC2 documentation.

  • Micro Instance: 613 MiB of memory, up to 2 EC2 Compute Units (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform

High-Memory Instances

Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.

  • High-Memory Extra Large Instance: 17.1 GiB of memory, 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform
  • High-Memory Double Extra Large Instance: 34.2 GiB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform
  • High-Memory Quadruple Extra Large Instance: 68.4 GiB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

  • High-CPU Medium Instance: 1.7 GiB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of local instance storage, 32-bit or 64-bit platform
  • High-CPU Extra Large Instance: 7 GiB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

Cluster Compute Instances

Instances of this family provide proportionally high CPU resources with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications. You can learn more about Cluster instance concepts by reading the Amazon EC2 documentation. For more information about specific use cases and cluster management options for HPC, please visit the HPC solutions page.

  • Cluster Compute Eight Extra Large: 60.5 GiB of memory, 88 EC2 Compute Units, 3370 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

High Memory Cluster Instances

Instances of this family provide proportionally high CPU and memory resources with increased network performance, and are well suited for memory-intensive applications including in-memory analytics, graph analysis, and scientific computing. You can learn more about Cluster instance concepts by reading the Amazon EC2 documentation. For more information about specific use cases and cluster management options for HPC, please visit the HPC solutions page.

  • High Memory Cluster Eight Extra Large: 244 GiB of memory, 88 EC2 Compute Units, 240 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

Cluster GPU Instances

Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.

  • Cluster GPU Quadruple Extra Large: 22 GiB of memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

High I/O Instances

Instances of this family provide very high disk I/O performance and are ideally suited for many high performance database workloads. High I/O instances provide SSD-based local instance storage, and also provide high levels of CPU, memory and network performance. For more information about specific use cases and Big Data options on AWS, please visit the Big Data solutions page.

  • High I/O Quadruple Extra Large: 60.5 GiB of memory, 35 EC2 Compute Units, 2 x 1024 GB of SSD-based local instance storage, 64-bit platform, 10 Gigabit Ethernet

High Storage Instances

Instances of this family provide proportionally higher storage density per instance, and are ideally suited for applications that benefit from high sequential I/O performance across very large data sets. High Storage instances also provide high levels of CPU, memory and network performance.

  • High Storage Eight Extra Large: 117 GiB of memory, 35 EC2 Compute Units, 24 x 2 TB of hard-disk-drive local instance storage, 64-bit platform, 10 Gigabit Ethernet

EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

See Amazon EC2 Pricing for details on costs for each instance type.

See Amazon EC2 Instance Types for a more detailed description of the differences between the available instance types, as well as a complete description of an EC2 Compute Unit.

Operating Systems and Software
Operating Systems

Amazon Machine Images (AMIs) are preconfigured with an ever-growing list of operating systems. We work with our partners and community to provide you with the most choice possible. You are also empowered to use our bundling tools to upload your own operating systems. The operating systems currently available to use with your Amazon EC2 instances include:

  • Red Hat Enterprise Linux
  • Windows Server
  • Oracle Enterprise Linux
  • SUSE Linux Enterprise
  • Amazon Linux AMI
  • Ubuntu
  • Fedora
  • Gentoo Linux
  • Debian

Software

AWS Marketplace features a wide selection of commercial and free software from well-known vendors, designed to run on your EC2 instances. A sample of products is below. To see the full selection, visit AWS Marketplace.

Databases

  • Microsoft SQL Server Standard
  • MongoDB
  • Acunu Storage Platform Standard Edition w/ Apache Cassandra
  • TurnKey PostgreSQL – Object-relational Database System
  • Couchbase Server – Enterprise Standard

Application Servers

  • Amazon EC2 Running IBM WebSphere Application Server
  • Tomcat Java Web Application Deployment provided by JumpBox
  • Tomcat on Apache – Java Servlet and JSP Platform by TurnKey Linux
  • Zend Server (Clustered) w/ Gold Support

Content Management

  • WordPress provided by BitNami
  • Drupal 6 – Content Management Framework provided by TurnKey Linux
  • MediaWiki Wiki System provided by JumpBox

Business Intelligence

  • SAP BusinessObjects 10 Named User License
  • JasperReports Server Community Edition