If you want to be a cloud provider with a complete portfolio, that means you need to offer Infrastructure as a Service. That is where some cloud providers start(ed) – such as Amazon Web Services – and it is where Oracle completes its stack of cloud service offerings. Compute, Storage and Networking are the primary elements of IaaS for Oracle. Even though Oracle offers ordinary compute and storage, it can really make a difference when the service is not so ordinary – because of the scale, the level of security and isolation, and the sheer functionality. Oracle claims it wants to compete with the cloud-scale IaaS vendors, including AWS and Azure – on functionality and even on price.
I understand that in order to cater for customers that do SaaS and PaaS – where Oracle can distinguish itself with its software and platform – and also need a bit of IaaS, Oracle wants to offer infrastructure services. However, I am not quite sure why it should want to compete on IaaS services per se. Storage is storage, and Oracle’s TBs of storage can hardly be better than someone else’s. Given the current scale of Oracle’s cloud operations, it is very unlikely that Oracle can offer these services at a better price point than AWS or Google and still make a healthy margin. Of course the IaaS services are first and foremost required by Oracle itself: the PaaS products run on top of the IaaS offerings and the SaaS services run on PaaS. IaaS has always been there in the Oracle cloud – but largely under the hood.
However, the official position of Oracle is: when you need pure IaaS, we can offer you the same deal as AWS, or a better one. One of the taglines: “elastic infrastructure at low cost”; another: “Always-On Security and Fault-Tolerant Reliability at Commodity Prices”. Following up on the announcements from last year, OOW 15 added a lot of detail and substance in the area of IaaS services.
Elastic Compute from Oracle is designed for business-critical workloads, maintaining the highest levels of security, high availability, flexibility, and control. It is not your everyday, run-of-the-mill compute service. It can run Oracle Enterprise Linux, Windows, Ubuntu and Red Hat Linux. Various compute shapes are available, starting at one quarter OCPU with 1.8 GB RAM and increasing stepwise to 16 OCPUs with 120 GB RAM. However, it seems that Compute can only be acquired in bulk, from 50 OCPUs/month upwards. The Oracle website quotes $3,750 as the price tag for this smallest quantity on offer.
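Working through the figures quoted above (a back-of-the-envelope sketch; Oracle’s actual pricing and metering may of course differ):

```python
# Effective compute pricing, using the figures quoted above:
# a $3,750 bundle covering 50 OCPUs for one month.
bundle_price_usd = 3750
bundle_ocpus = 50

price_per_ocpu_month = bundle_price_usd / bundle_ocpus
print(price_per_ocpu_month)       # 75.0 USD per OCPU per month

# The smallest listed shape is a quarter OCPU, which would then run at:
print(price_per_ocpu_month / 4)   # 18.75 USD per month
```

So even the entry-level bundle implies a per-OCPU rate that can be compared directly against the hourly or monthly rates of other IaaS vendors.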
In addition to the common multitenant architecture, where a compute unit is a virtual machine that shares underlying infrastructure with other tenants with the possibility of some interference between tenants, Oracle offers dedicated compute. With dedicated compute, hardware units and network zones [with site-to-site VPN] are reserved exclusively for one tenant, resulting in even more security and predictable performance. Not surprisingly, the dedicated compute offering has a higher entry level, of 500 OCPUs/month for $50k – and it may also require a longer-term subscription.
You get access to Oracle Compute Cloud Service through the REST API, Python CLI, and a web-based UI. Virtual machine (VM) instances are provisioned in minutes through a self-service portal.
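As an illustration of what REST access could look like – a sketch only: the host name, URL path, cookie value and media type below are my assumptions for illustration, not the documented Oracle API, so consult the official REST reference for the real shapes:

```python
# Sketch: preparing (not sending) a REST call that lists the VM instances
# in a container of the Compute Cloud Service. All endpoint and header
# details here are illustrative assumptions.
import urllib.request

def build_list_instances_request(endpoint, container, auth_cookie):
    """Build a GET request for the instances in a container."""
    url = f"{endpoint}/instance/{container}/"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Accept", "application/oracle-compute-v3+json")  # assumed media type
    req.add_header("Cookie", auth_cookie)  # session obtained from a prior authenticate call
    return req

req = build_list_instances_request(
    "https://compute.example.oraclecloud.com",  # hypothetical endpoint
    "Compute-acme/admin",                        # hypothetical container
    "nimbula=...",                               # placeholder cookie
)
print(req.full_url)
```

The same operations are exposed through the Python CLI, so scripted provisioning does not require hand-rolled HTTP at all.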
Consumers of Oracle Elastic Compute have complete control. They assign users into groups, along with their permissions, so that users’ activity and access are policy-based. Customers can use a distributed and flexible firewall that allows the isolation of groups of objects (for example, VM instances and storage volumes) so that only specifically permitted communications are enabled. This distributed firewall operates on a flat network without the need for hard network partitioning. It isn’t restricted by location, and operates across the cloud (that is, regardless of nodes, racks, or sites). Security rules identify the permitted communication between security lists or between IP addresses; communication is either permitted or denied.
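The policy model described above boils down to default-deny with explicit permit rules between security lists. A toy model of my own (the rule format and list names are made up, not Oracle’s actual syntax) captures the idea:

```python
# Toy model of a distributed, location-independent firewall:
# traffic between security lists is denied unless explicitly permitted.
permit_rules = {
    ("web-servers", "app-servers", 8080),   # web tier may call the app tier
    ("app-servers", "db-servers", 1521),    # app tier may reach the database
}

def is_permitted(src_list, dst_list, port):
    """Default deny: only traffic matching a permit rule goes through."""
    return (src_list, dst_list, port) in permit_rules

print(is_permitted("web-servers", "app-servers", 8080))  # True
print(is_permitted("web-servers", "db-servers", 1521))   # False: no direct path
```

Because the rules reference security lists rather than locations, the same policy applies regardless of which node, rack or site a VM lands on – which is exactly the point of a flat, distributed firewall.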
Customers of the Elastic Compute Cloud Service have complete visibility over resource usage and network traffic in the cloud. Cloud Watch – a UI for application orchestration – monitors all objects defined within it, maintaining high availability and taking care of elastic scale-up and scale-down. If there is a failure, an automatic restart of the failed object on another computing node ensures that the application is successfully restored.
At the really high end of the compute shape scale is a new offering from Oracle – Engineered Systems as a Service or simply Exadata as a Service. Instead of typical small or medium sized compute shapes, this service offers compute capacity in very sizable chunks (which are the equivalent of a quarter rack Exadata): 28 to 68 cores, ½ TB memory, 19.2 TB flash storage and 42 TB disk storage. In terms of cutting-edge hardware, Oracle Exadata employs high-performance scale-out database servers and scale-out intelligent storage servers, connected by an ultrafast, low-latency InfiniBand network. Along with high-capacity disks, Oracle Exadata includes state-of-the-art PCI flash storage, which delivers very high throughput and short response times. That kind of infrastructure can be used for high end, enterprise database workloads – or other types of workloads.
With this offering, Oracle is upping the ante in online database services by making available in the cloud its highest-performing database engineered system. This service brings a level of performance previously unavailable to cloud customers, who have had to settle for databases with limited functionality running on generic cloud infrastructures. Customers can employ Oracle [Database Cloud] Exadata Service not only for mission-critical production databases, but also for relatively simple dev/test duties. They can even just try it out before buying an on-premises system.
Another primary Infrastructure service in anyone’s cloud obviously is storage. The ability to persist and retrieve blocks of data. Every service Oracle is offering on the cloud needs this – and it is a service in its own right. Well, actually a few of them.
The simplest storage service is also the oldest in the Oracle stable – and probably in any cloud vendor’s stable. Object Storage allows easy write and read access – intended for living data. You pay for the storage capacity (GB, TB) per month as well as for the outbound (read) data transfer volume (GB, TB). To give you an idea: storage comes at a few dollar cents per GB per month; outbound data traffic – that means: out of the Oracle Public Cloud – is probably more relevant to consider, at close to a dime per GB.
Data that is really cold in terms of being accessed (as opposed to hot data that is used all the time), but cannot actually be discarded for regulatory reasons, can be put in Archive Storage – at $1 per TB per month. Oracle claims this is the lowest cost per gigabyte in the industry. The service has on-demand capacity, scales to petabytes of data, retains multiple redundant data copies for the highest availability, can encrypt all data at rest for security, does automatic data integrity checks for durability and offers industry-standard RESTful APIs for writing and retrieving data. The idea is that this data is hardly accessed at all; only for recovery, audit investigations and other exceptional cases will this data be read.
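The price gap between the two tiers is worth making concrete. A quick estimate using the ballpark figures quoted above (the object-storage rate of "a few cents" is assumed at 3 cents per GB; both are illustrative list prices, not a quote):

```python
# Ballpark monthly bill for 100 TB, using the prices quoted above.
TB = 1000  # GB per TB, decimal storage units

object_price_per_gb = 0.03    # "a few dollar cents per GB per month" (3 cents assumed)
archive_price_per_tb = 1.00   # "$1 per TB per month"

data_tb = 100
object_cost = data_tb * TB * object_price_per_gb
archive_cost = data_tb * archive_price_per_tb

print(object_cost)   # 3000.0 USD/month in Object Storage
print(archive_cost)  # 100.0 USD/month in Archive Storage
```

Roughly a 30x difference – which is why genuinely cold data belongs in the archive tier, with object storage reserved for data that is actually read.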
The Oracle Hierarchical Storage Manager (HSM) – renamed earlier this year from SAM QFS – was launched as the facility that manages data across various tiers of storage, according to policies based on access patterns, size and age of the data. HSM leverages the All Flash Storage (All Flash FS) device, disk (ZFS) and tape archives (StorageTek), and for the lowest access requirements the Archive Cloud Services. Access to the data is transparent to the business: it does not matter from which tier the data is actually pulled. Oracle HSM is OpenStack Swift-compliant, meaning that it can provide the implementation for the Swift storage node in OpenStack implementations. Oracle boasts proven scalability, greater than all competitors, with customers managing up to 40 PB using OHSM.
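A tiering policy of the kind HSM applies can be sketched as a simple decision function. The thresholds and tier names below are made up for illustration – real HSM policies are configurable and can also weigh size and access patterns, not just age:

```python
# Hypothetical tiering policy in the spirit of Oracle HSM: place data on a
# tier based on how recently it was accessed. Thresholds are invented.
def choose_tier(days_since_access):
    if days_since_access <= 7:
        return "all-flash"            # hot: All Flash FS
    if days_since_access <= 90:
        return "disk (ZFS)"           # warm: ZFS disk pool
    if days_since_access <= 365:
        return "tape (StorageTek)"    # cold: tape archive
    return "archive cloud"            # frozen: Archive Cloud Services

for age in (1, 30, 180, 800):
    print(age, choose_tier(age))
```

The point of the product is that applications never see this function: they address one namespace, and the tier a read is actually served from is HSM’s business.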
On top of plain storage and archive, Oracle offers richer services around file and database backup (& recovery), an easy alternative to writing, shipping, and storing backup tapes at an off-site location. These services understand that the bytes stored are for backup purposes, have a certain format and need to be accessed in a certain way and scenario. Some smarts are built in to recognize delta records and understand version management of backups, for example backups created with Oracle RMAN or Symantec NetBackup. Oracle provides an OpenStack Swift API and also an S3 API; the latter means that any backup mechanism that can talk to the Amazon S3 service can also talk to Oracle’s file and database backup service.
For Oracle Database backup, administrators use the familiar RMAN (Oracle Recovery Manager) interface to perform backup and restore operations, so there’s no need to learn new tools or commands. The Oracle Database Cloud Backup Module has to be installed, a few RMAN settings configured and then you’re ready to back up to the cloud using familiar RMAN commands. Note: this module is what makes it possible to perform cloud backups and restores.
A new way to transfer data into the cloud is bulk data transfer. This is what you would do when you initially migrate existing systems and existing data storage into the cloud, and you don’t feel like piping many TBs or even PBs of data over the wire. Oracle will ship a physical storage device onto which you can load your PB datasets. You ship the device back to Oracle, Oracle loads the data into your cloud service and clears the device. Note: this announcement comes shortly after Amazon’s announcement of a new service called Snowball: https://aws.amazon.com/importexport/
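To see why shipping a box of disks makes sense, consider how long piping a petabyte would take over a dedicated link (an idealized calculation: it assumes the link is fully utilized around the clock, which real WAN transfers rarely achieve):

```python
# Back-of-the-envelope: transferring 1 PB over the network.
data_bits = 1_000_000_000_000_000 * 8   # 1 PB in bits (decimal units)

for gbps in (1, 10):
    seconds = data_bits / (gbps * 1_000_000_000)
    days = seconds / 86_400
    print(f"{gbps} Gbps: ~{days:.0f} days")
# 1 Gbps:  ~93 days
# 10 Gbps: ~9 days
```

Three months on a 1 Gbps pipe versus a courier round trip measured in days: for initial migrations at PB scale, the truck wins.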
Clearly there is a lot at stake when linking up multiple on-premises sites to multiple cloud services and data centers, especially when high data volumes and high performance requirements are involved. No cloud can function without network connectivity, internal and external. With services around the network, any cloud vendor can first of all make or break its own business, and second of all really distinguish itself. In addition to connections at reasonable bandwidths and the ability to acquire static IP addresses for virtual machines, customers demand secure, isolated connections between on premises and cloud – and also big pipes at much higher bandwidths and much lower latency.
With Software Defined Networking – probably based on its 2014 Corente acquisition – Oracle intends to provide a number of services around VPN connections, interconnections between cloud compute machines and other cloud services and multi-site on premises to cloud interactions. See this document for in depth details about Oracle SDN in general.
Oracle Network Cloud Service offers VPN for Dedicated Compute, touted as a cost-effective way to extend your network: data is secured while traversing the public internet, and multiple sites can connect to an Oracle Dedicated Compute zone.
The FastConnect service also provides connectivity with reduced security risks, using standard Layer 3 connectivity with BGP routing. It offers bidirectional transfer of large data volumes and more predictable network performance through deterministic routing to the Oracle Cloud.
The networking services also leverage the Equinix cloud exchanges to offer a big-pipe connection between on premises and cloud. Equinix does the same thing for Azure and AWS – as is shown in this illustration:
Users will get quicker response times and better performance as data is accelerated through Cloud Exchanges in Amsterdam, Chicago, London, Singapore, Sydney and Washington data centers. It should also provide a better framework to support the hybrid cloud systems that most enterprises run, as well as solid support for migrations to the cloud.
Cloud infrastructure services offered to customers obviously require real components running in real data centers to implement these services. With 19 data centers around the globe servicing tens of millions of users, Oracle is becoming one of the major users of infrastructure components. The clear strategy of offering customers the same stack on premises as is available from the cloud – so that customers can implement their private cloud capabilities in the same way Oracle does its public cloud – means that these underlying components are of interest to us: we can use them ourselves.
The components that form the foundation for the Oracle IaaS set of services range from OpenStack, Oracle Linux and Ksplice all the way down to the SPARC M7 processor with SQL and Security in Silicon. Somewhere in the middle sits virtualization with Oracle Virtual Machine (OVM) and VirtualBox. Some observations across this stack.
Just prior to the OOW 15 conference, Oracle released Oracle OpenStack Release 2, based on OpenStack’s Kilo release. This is the first commercially available OpenStack implementation completely packaged as Docker instances, which eliminates the need to install components individually and streamlines installation, configuration and upgrades. You load the OpenStack images into your local Docker registry and then run whichever modules you like.
Oracle OpenStack for Oracle Linux Release 2 can be deployed in private, public or hybrid clouds. It leverages Oracle Linux and Oracle VM and is pre-integrated with Oracle storage solutions. MySQL is used for the Keystone core identity, token and policy service. Oracle created the MySQL Cluster community edition for this: customers can now deploy MySQL in an active/active configuration, bringing enterprise database performance and reliability to OpenStack cloud deployments. At a later stage there are plans to also support the Oracle Database as the registry for OpenStack.
Oracle intends to release one version of OpenStack per year. That means it will skip every other OpenStack release and the next one to be expected from Oracle is not Liberty but Mitaka (see https://wiki.openstack.org/wiki/Release_Naming for details). See Ronald Bradford’s article for more on this OpenStack implementation from Oracle and the role of MySQL therein: http://ronaldbradford.com/blog/oracle-openstack-leveraging-mysql-cluster-and-docker-2015-11-11/ .
The evolution of Oracle Linux continues, with release 7.2 published in the second half of November. Before and during the Oracle OpenWorld conference, there was some talk of running Oracle Linux on SPARC in addition to X86 chip architectures. Even Larry Ellison mentioned that ‘we have Oracle Linux running on SPARC [, including on the new M7]’. This version of Oracle Linux is not (yet) publicly available. It is used for internal testing – which is very convenient for the Oracle Linux development teams, as it allows, for example, testing with 4,000 threads. Earlier in 2015, John Fowler referred to Linux on SPARC as well: “[Fowler] said that Linux will be able to run on Sparc at some point, though he declined to give a specific timeframe.” (see the citation in this article: http://www.theinquirer.net/inquirer/news/2373412/oracle-says-sparc-m7-chip-will-put-an-end-to-heartbleed). It seems that if it wants to, Oracle could move quickly and release Linux on SPARC, and a SuperCluster machine that runs Linux instead of Solaris. What exactly the plans are in this direction is not publicly clear.
Being able to hot patch a running Linux environment – without incurring downtime – is sort of the holy grail for any IT department, especially one running a major public cloud. Zero-downtime kernel updates have been available for several years using Oracle Ksplice, clearly very relevant to Oracle as well as its customers.
Ksplice can now patch user-space libraries in addition to the kernel. This is a groundbreaking addition to the already extensive capabilities of Ksplice, giving administrators the tools they need to cope with security threats and other issues without impacting running systems. The first libraries supported are OpenSSL and glibc (the GNU C Library, the standard C library used by applications on Linux). More libraries will be supported as time progresses.
Patching user-space libraries is much harder than patching the kernel: each running process can have its own copy of the library. Note that Ksplice will never go so far as to actually upgrade a kernel to a new major version – it is for patching only. Some longer-term ideas for Ksplice are to also patch running applications such as MySQL and JVMs. Another step will be patching hypervisors such as Xen with bug fixes and security updates.
Virtualization with Oracle Virtual Machine and VirtualBox
Much of the Oracle Public Cloud runs on top of OVM: Oracle VM is the only fully certified platform for all Oracle software, running on X86 as well as SPARC. Deployed and tested in real world enterprise datacenters, Oracle VM is proven to reduce operations and support costs while simultaneously increasing IT efficiency and agility. Note that with OVM, Oracle software licenses can be calculated at the VM level instead of based on the underlying physical hardware, as is the case with other hypervisors.
OVM 3.4 will be released later this year. Performance improvements are the major aspect of this release, for example for the discovery of servers. An important investment area for OVM is advancement in software defined network (SDN).
Virtual machines can be exported as an OVA file in this release. This file can then be imported into another VM environment, for example into VirtualBox. Vice versa, OVM can create a VM directly from an imported virtual appliance, without the extra step of cloning to a template. Note that all administration for OVM can be done from Enterprise Manager 12c.
VirtualBox is the lower-end little brother of OVM, for lean and agile virtualization from the desktop and laptop into the data center. VirtualBox plays a key role at Oracle OpenWorld: it is the main vehicle for the hands-on lab environments – as well as many demo setups for the exhibition hall and the conference sessions. VirtualBox is also the current mechanism used for distributing try-out development environments on OTN and in Oracle beta test programs.
In July 2015 Oracle released version 5.0 of VirtualBox, with several noteworthy features:
· support for disk encryption, with personal key-based encryption; support for a master key will be added later
· some advanced new Intel chip features, such as fast mem copy, have been extended into VirtualBox
· support for the latest Mac OS and Windows releases (El Capitan and Windows 10)
In the future, support for 3D will be added to VirtualBox, because Mac OS and Windows as guests need it. Another future option for VirtualBox could be the integration of Docker, which would allow users to run a Docker image with VirtualBox without having to explicitly create a Linux virtual machine: a Docker-enabled Linux virtual machine could be transparently run by VirtualBox itself. Interesting tidbit: Oracle talks to Mitchell Hashimoto of Vagrant fame to see how VirtualBox could be further evolved.
Docker definitely has made quite an entrance into the world of Oracle. Important steps were taken in the direction of Docker in early summer 2015: the announcement of Docker integration into Solaris Zones, and the fact that Oracle joined the Open Container Initiative (OCI) – just a few weeks after the initiative’s initial launch. The OCI is an industry-wide collaboration to develop a standard for container definitions – as open source, based on the Docker container definition – to ensure portability and prevent the emergence of competing container definitions. Many sessions at OOW 2015, including keynotes, mentioned Docker in one way or another. WebLogic is supported on Docker, Oracle’s OpenStack and Oracle Linux are provided as Docker images, a Docker Cloud service is announced or at least suggested, and VirtualBox is evolving with Docker in mind.
Here is the slide used by Thomas Kurian to announce the Docker Container Cloud Service (which by the way also seems to provide the foundation for the Application Container Service):
It is clear to Oracle that the market wants Docker – for DevOps with isolation, dynamic scalability, portability and agile operations. Docker is clearly not the answer to everything – it does not do virtualization, it has limited sizing, it lacks networking capabilities and it is relatively immature. But it does help streamline operations, and it is involved in a happy flirt with the microservices movement.
At this moment, there is no official Oracle Docker image for any but its open source products (such as MySQL and Oracle Linux). For some products, such as WebLogic, Oracle provides official build-files, allowing us to construct the Docker images ourselves (legal restrictions around licenses unfortunately currently prevent Oracle from releasing images with the software pre-installed). The Oracle Database apparently can run in a Docker container, but some limitations apply. Word had it at the conference that many more Docker files would become available, for creating images with Oracle software running inside. In some of the Beta programs, Docker images are used for distributing the latest software, rather than full blown virtual machines.
The piece of hardware that got the most airtime and caused the most excitement during OOW 15 was probably also the smallest: the M7 processor. One of the reasons for Oracle to acquire Sun Microsystems was so it could create highly tuned system stacks, from the processor up through the operating system, databases and middleware, all the way out to the applications – engineered to work together at all levels. With the current family of engineered systems based on standard X86 hardware, this is not entirely the case. The M7 has the potential to change that.
The M7 has 32 [S4] cores that can each run 8 threads. Each S4 core has 16 KB of L1 data cache and 16 KB of L1 instruction cache. The M7 chip organizes the cores into clusters of four. Each cluster of four cores shares a 256 KB L2 instruction cache, and each pair of S4 cores also shares a 256 KB write-back L2 data cache. Both L2 caches provide up to 500 GB/sec of bandwidth each. The 64 MB L3 cache is partitioned and fully shared. Compared to the previous generation of SPARC processors, the M7 delivers a threefold increase in overall I/O bandwidth.
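The headline numbers above multiply out to impressive per-chip figures:

```python
# Per-chip figures implied by the M7 specs above.
cores = 32
threads_per_core = 8
print(cores * threads_per_core)   # 256 hardware threads per chip

clusters = cores // 4             # cores are grouped in clusters of four
print(clusters)                   # 8 clusters, each with its own 256 KB L2 i-cache
```

256 hardware threads per socket is exactly the kind of scale that makes internal testing with thousands of threads (as mentioned above for Linux on SPARC) a realistic exercise on a small number of machines.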
The true breakthrough in the M7 is the set of capabilities embedded in silicon: security, SQL and capacity features. These are the outcome of a multi-year development project and a huge investment by Oracle, following its acquisition of Sun Microsystems. It demonstrates that Oracle has both the economic capability and the intellectual property to undertake this kind of project. It is presented by Oracle as the next big thing in processor evolution: 20 years ago the 64-bit processors, ten years ago multicore and multithreaded processors, and now chips with software functions embedded on them.
Security in silicon delivers two key features: hardware-assisted encryption and silicon-secured memory. Hardware-assisted encryption uses crypto accelerators to deliver fast end-to-end encryption of popular security ciphers (such as AES and SHA). Silicon-secured memory protects against attackers accessing data in memory. This memory protection is always on, and it has a near-zero impact on performance. It would have protected against both Heartbleed and Venom, two of the best known and most vicious recent security threats.
The M7 SQL-in-silicon features include in-memory analytics acceleration using data analytics accelerator (DAX) engines, and make the new SPARC M7-based systems superior platforms (a 10x or greater performance improvement) for running Oracle Database 12c In-Memory. Two portions of very low-level SQL processing – the part that scans for particular strings across a large amount of memory, and the part that helps filter and join rows – are encapsulated in co-processors in silicon.
SPARC M7 capacity in silicon dramatically reduces memory utilization. Capacity in silicon includes inline decompression, which enables aggressive data compression alongside high-performance data access. Compression allows a fair-sized database to reside in memory at one tenth of its original size, and in-silicon decompression allows this database to be operated on at full performance. This lets users either run less expensive systems or tackle larger problems.
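To make the “one tenth” claim concrete – a sizing sketch, taking the quoted 10x compression ratio at face value (real ratios depend heavily on the data):

```python
# How much DRAM a database needs when held in memory in compressed form,
# assuming the 10x compression ratio quoted above.
compression_ratio = 10   # "one tenth of its original size"

def dram_needed_tb(db_size_tb):
    return db_size_tb / compression_ratio

print(dram_needed_tb(50))   # 5.0 TB of memory for a 50 TB database
```

On that arithmetic, a 50 TB database fits comfortably in the 8 TB of memory of a larger M7 system only when compressed – which is precisely why decompression has to happen in silicon at full speed rather than in software.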
Through the Oracle Software in Silicon Developer Cloud developers can access a SPARC M7 system virtual machine to test their existing code, sample code available in the cloud service, or new application code.
The SPARC M7 system family includes six new systems, available for order today (SPARC T7-1, T7-2 and T7-4, SPARC M7-8 and M7-16 systems, and the Oracle SuperCluster M7). The Oracle SuperCluster M7 is the flagship SPARC M7-based engineered system. What Oracle’s plans are for the SuperCluster M7 with regard to the Exa-systems, or in terms of powering its own Public Cloud, did not quite become clear at the OOW 15 conference.
An interesting announcement was made on Project Apollo, a close collaboration between Oracle and Intel – surely one of the companies to suffer from inroads made by the M7 chip. This project – started in the spring of 2015 – aims to achieve enterprise-grade reliability, availability and security through Oracle’s Public Cloud, and, more plainly, to optimize Oracle hardware running Intel chips and memory.
Engineers from both companies will work closely together – including a dedicated lab with all required hardware, software, infrastructure, setup and workloads – to create optimized and tuned solutions that bridge the gap between single systems and massive scale-out, tuning and optimizing real-world applications using Oracle Cloud workloads. This should help overcome the challenges of migrating enterprise customers to the cloud, making the transition faster, easier and more cost-effective. Intel and Oracle are to jointly take on IBM and its cloud offerings. One element is a new program called Exa Your Power, which helps companies migrate from IBM hardware to Oracle engineered systems on Intel X86 chips.
This year’s conference did not cause a lot of buzz around the Exa-family of engineered systems per se. Exadata as a Cloud Service was announced – and Exalogic plays a key part in the Private Cloud Machine for PaaS & IaaS – see below. No new members of the family were introduced, and no spectacular new features were announced for the existing machines: Exadata, Exalogic and Exalytics, nor for their close relatives the Zero Data Loss Recovery Appliance (ZDLRA), Big Data Appliance and Database Appliance (ODA).
An important development in Exadata systems – which got a mention in a Larry Ellison keynote – is the rise of PCIe flash cards attached – with very low latency – to the server (so there is no need for any interaction with a flash array). The Exadata Smart Flash Cache (SFC) is read/write and holds frequently accessed data in very fast flash storage. The SFC is smart because it knows to avoid caching data that will never be reused or will not fit in the cache. The In-Memory column format is automatically used in the Exadata Smart Flash Cache. This multiplies the effective columnar capacity by 10x to 100x – which in turn can result in huge performance gains.
In November, Oracle announced the Exadata X5-8, its newest addition to the X5 line of engineered systems. Each Oracle Exadata X5-8 system offers up to 576 CPU cores, more than 1.3 PB of disk storage or 180 TB of ultra-fast PCIe flash, and up to 24 TB of memory. The X5-8 is designed for large-scale private cloud database initiatives, enabling large numbers of databases with varied workloads to be consolidated onto a single Exadata system – typically through the multitenant architecture – resulting in reduced operational and management costs. Oracle Exadata X5-8 ships with the latest Oracle Exadata software release, with new capabilities such as IPv6 support, improved ExaCLI, improved Exadata storage statistics in AWR reports, and reverse offload improvements.
Some numbers on these systems, as of October 2015: in total, Oracle has shipped over 15K engineered systems to customers, about one third of which in the last year; almost half of these are Exadata units.
Private Cloud Appliance (PCA)
The Virtual Compute Appliance (VCA) has been renamed Private Cloud Appliance (PCA). PCA is a generic appliance: it does not have any special secret sauce for running Oracle software, unlike the engineered systems.
This appliance will be updated with support for software-defined networking, with for example the option to define private networks across Linux Containers, virtual machines and Docker containers. PCA comes with Oracle OpenStack for Oracle Linux Release 2 pre-installed.
Whether customers are running Microsoft Windows, Linux or Oracle Solaris applications, Private Cloud Appliance supports a large range of mixed workloads hosted in a converged server, network, and storage environment. High-performance, low-latency Oracle Fabric Interconnect with Oracle SDN – two products in the Oracle Virtual Networking family – allow automated configuration of the server and storage networks. The embedded Private Cloud Appliance controller software automates the installation, configuration, and management of all the infrastructure components at the push of a button. Customers need to enter only basic configuration parameters and create virtual machines (VMs) manually or by leveraging Oracle VM Templates to get a full application up and running in a few hours. With Oracle Enterprise Manager 12c, the Private Cloud Appliance is transformed into a Cloud Services delivery platform and provides a simple path from on premises to Oracle Cloud.
Oracle Private Cloud Machine for PaaS & IaaS
Somewhat confusingly named, this Private Cloud Machine for PaaS & IaaS is far more than the PCA. This machine offers the Oracle Public Cloud in a box, on premises. It allows organizations that are reluctant to move their data into the cloud to get all the benefits of a public cloud service: no investment, quick ramp-up and dynamic scalability. Management is done by the cloud service provider – no staff needs to be hired and trained. The appliance is paid for through a subscription. To all intents and purposes it is the same as the Oracle Public Cloud services – it just runs behind the enterprise firewall, managed by Oracle staff. The software running on the appliance is exactly the same as in the Oracle public cloud. The update cadence of the appliance will be the same as for the Oracle public cloud – for patches as well as for new functionality.
Oracle states: “Oracle Private Cloud Machine for PaaS and IaaS will be an on-premises Oracle Cloud Platform running the exact same software and hardware as Oracle Cloud and will offer 100 percent compatibility with Oracle Cloud.” The hardware will be an Exalogic system, running Exalogic Elastic Cloud Software 12c (EECS). For Exalogic, EECS 12c is a significant new release that delivers Oracle Cloud on Premise. This capability will offer customers the flexibility to run Oracle Cloud in their own datacenter and get the same cloud services and user-experience as in Oracle public cloud.
EECS 12c will introduce platform-as-a-service (PaaS) cloud services, with support for an initial set of PaaS services such as Java Cloud Service (JCS) and Integration Cloud Service (ICS), with other PaaS services (Database Cloud, Application Container Cloud, Messaging Cloud and others) to follow soon. There will also be support for a robust infrastructure-as-a-service (IaaS) cloud service, with Enterprise Manager Cloud Control (EMC2) providing a single pane of glass for management and monitoring of IaaS, PaaS, applications and the Oracle Exalogic hardware.
This idea is golden. In theory, it takes everything that can make Oracle stand apart from other vendors and brings it together in a compelling proposition. This has the potential to be a game changer. No other cloud vendor is capable of offering this private cloud capability – because no other vendor has the stack, or at least the same stack, available both in the cloud and on premises. The questions remain: when will Oracle actually be able to deliver this service, what is the entry level (you probably cannot get the public cloud on premises for a two-OCPU JCS instance), and will Oracle be able to pull it off in terms of management? Doing operations for its Public Cloud is a tremendous challenge. Ramping up the cloud infrastructure and rolling out all services to its own 19 data centers already seems an undertaking that Oracle is struggling with. Extending this effort to private data centers around the world, hooking them up in terms of network (to both Oracle’s public cloud infrastructure for remote management and the private on-premises environment for integration with pure on-premises systems) and staffing the operations where automated operations are not yet fully available does not seem realistic, at least in the short term.
Two other offerings from Oracle seem at the very least related to this new option of Public Cloud on Premises. These are:
· Oracle Infrastructure as a Service (IaaS) Private Cloud
· Oracle Managed Cloud Services
As I understand it, the first one of these delivers Oracle Engineered Systems hardware (Oracle Exadata, Exalogic, Exalytics, Big Data Appliance, Oracle Zero Data Loss Recovery Appliance, and Oracle SuperCluster) including support, deployed in the customer’s data center for a monthly fee, with no upfront capital expenditures. Oracle IaaS Private Cloud combines the security and control of on-premises systems with features of cloud computing, including Capacity on Demand, which enables businesses to access and pay for peak CPU capacity only when needed. Note: the minimum term commitment for Oracle IaaS Private Cloud is three years. Also note that software licenses are not included in this service: the service is about on premises engineered system capacity in a pay per use model.
Oracle Managed Cloud Service (OMCS) hosts the customer’s software, typically in the Oracle data center – but sometimes at the customer’s or a partner’s premises. OMCS owns and manages both software and hardware, using single-tenant dedicated machines. The software managed by OMCS ranges from business applications (including E-Business Suite, PeopleSoft, Siebel, JD Edwards, Fusion Applications and dozens more) to technology products (Database, Fusion Middleware, Identity Management, Engineered Systems) and services that span the entire software lifecycle (from migration, testing, and deployment to compliance and disaster recovery). OMCS predates the move to the Oracle Public Cloud – it was previously called Oracle On Demand, with most emphasis on hosted business applications. It would be interesting to know whether OMCS has already fully switched over to the Oracle Public Cloud infrastructure and the same services that are publicly available. That would be a good example of the proof of the pudding – or eating one’s own dogfood, as the more common expression at Oracle goes.