Violin Memory Blog

Hear what the experts at Violin Memory have to say about a variety of topics on all things Flash.

Postcard from Oracle OpenWorld 2014: The Oracle FS1 Flash Array

by on October 13, 2014


A couple of weeks ago, along with thousands of other people, I attended Oracle OpenWorld 2014 in San Francisco. There were lots of announcements and lots of opportunities to learn (especially at the excellent OakTable World conference-within-a-conference). My personal favourite session was Jeremiah Wilton’s talk on running Oracle on Amazon Cloud Services – I think a whole essay could be written on Oracle’s Cloud Licensing Policy. And as Steve Karam has already pointed out, the overwhelming message of OOW14 was cloud, cloud, cloud. It seems that Larry no longer likes to talk about on-premise.

But I don’t want to talk about cloud. My perspective on OOW is that of a storage vendor – and for us vendors there was an interesting announcement tucked away beneath all the cloud hyperbole.

The Oracle FS1 Storage Array

Just to be clear, I’m not here to deliver a technical analysis of the FS1. As a database guy I have many colleagues and contacts in the storage industry who are much better qualified to discuss this product (especially those who are ex-Pillar Data Systems, because the FS1 is after all simply the next-generation Pillar Axiom with SSDs).

But as with my tracking of the History of Exadata, I like to watch Oracle’s marketing strategy around its products – I often find you can read more between the lines than you can get simply from buying into the marketing hype. (And let’s face it, Oracle does like a bit of marketing hype.)

Surprise Announcements

The release of the FS1 caught most people by surprise as it was not advertised prior to OpenWorld commencing. After Larry Ellison mentioned it during his keynote, a new session was announced with the catchy name “Introducing the Oracle FS1 Series: Taking Flash Storage into the Mainstream” delivered in almost evangelical style by none other than Mike Workman (previously the CEO of Pillar and now SVP of Oracle Flash Storage Systems). And I was in the room, eagerly awaiting the news.

Now based on my experience of working for vendors (including Oracle) the first thing I think when I hear of surprise announcements is that they are rushing something forward for the purposes of marketing noise – something that perhaps isn’t quite ready. Could that be the case here?

In my opinion and based on this session, emphatically yes. The session introducing the FS1 was incredibly light on details, but – adopting an age-old Oracle tactic – packed full of attacks on competitors. And nobody got attacked more than EMC’s XtremIO all flash array. Here’s one sample slide:

Can you see the small print at the bottom of this slide? I couldn’t – and I was sitting four rows from the front.

Ironically, this was the first year I can remember where EMC didn’t have a stand in the exhibition hall (in 2013 their stand was one of the biggest at the show). I noticed that the guy sitting next to me was an EMC employee, so after the session ended I asked him what he thought. “Thanks Oracle for all the free marketing!”, was the reply.

Only a few high-level details came out of the session. For example, we learnt that the FS1 took three years to develop and can have up to four tiers of storage:

  • Performance Flash (actually 400GB SSDs)
  • Capacity Flash (again via 1.6TB SSDs)
  • Performance HDD (300GB or 900GB disk drives)
  • Capacity Disk (4TB disk drives)

That’s a lot of disk. Keep that in mind for later.

Marketing Claims

I quite enjoyed the FS1 introduction session, although I suspect not necessarily in the way I was supposed to. It certainly made me smile when Mr Workman made this statement:

“The Oracle FS1 is the first mainstream, general purpose flash array”

Mainstream? General purpose? Here at Violin Memory we have numerous customers running multiple, mixed workloads on our award-winning 6000 series all flash arrays:

And so on… I even know of a customer who uses a Violin array as a file server! Sounds pretty general purpose to me. And to be fair, I’d be surprised if some of the other all flash array vendors weren’t puzzled by Oracle’s claim too.

But a more confusing comment from Mr Workman was this:

“It’s NOT a hybrid array”

Remember that bit about four tiers of storage, two SSD and two HDD? In what way is that not a hybrid array? Does Oracle consider the phrase “hybrid array” beneath it? I notice that Oracle’s FS1 home page and the associated press release both steer clear of using the H-word. Sadly the media hasn’t received that memo, so the headlines still call it a hybrid - as will the rest of us, I’m sure.


As the FS1 product nears the point where it’s ready for launch (rather than pre-launch) I’m sure we’ll find out more about it. Maybe some of the storage players will comment – although so far all I’ve seen is varying degrees of apathy. Who knows, maybe it will revolutionise the world of storage. But until then – and while all we have is “9x faster than Y” claims – it feels like another triumph of hype over substance.

No offence Oracle, but until you come up with something more concrete I’m just going to think of it as the BS1 Flash Storage Array…

WFA: High Performance Storage for Hyper-V Clusters on Scale-Out File Servers (SOFS)

by on October 8, 2014


High-performance storage is a prerequisite for any enterprise-class virtualization initiative. After all, consolidating workloads onto fast servers and then slowing them down with spinning rust storage is a pointless, not to mention expensive, endeavor. Given the importance that virtualization plays in the modern data center, it follows that storage not only needs to be high performance, but also easy to manage and scale. Recently, Microsoft undertook an experiment to demonstrate exactly this. Deploying a SOFS across two Violin Windows Flash Array-64s, Microsoft illustrated the simplicity of deploying scalable high-performance storage that delivers remarkable IOPS performance, bandwidth, and extremely low latency.

Summary of Experimental Performance Results

But don’t take my word for it; read what Microsoft has to say. If you’d like to learn a bit more, I invite you to download this white paper and reference architecture:


This white paper, Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers using Violin Windows Flash Arrays, demonstrates the capabilities and performance of the Violin Windows Flash Array, a next-generation All Flash Array storage platform. Through the joint efforts of Microsoft and Violin Memory, the WFA provides built-in high performance, availability and scalability through tight integration of Violin’s All Flash Array and Microsoft Windows Server 2012 R2 Scale-Out File Server Cluster.

The results presented in this white paper show the high throughput and low latency that can be achieved using Microsoft technologies bundled with Violin hardware. With two Violin WFA-64 arrays, workloads running in Hyper-V VMs scale linearly to over 2 million IOPS for random reads and 1.6 million IOPS for random writes, and to 8.6 GB/s of sequential read and 6.2 GB/s of sequential write bandwidth. Even at the maximum throughput of 2 million IOPS, the 99th percentile latency stays capped at 4.5ms, and the latency of simulated OLTP IO traffic at a load of 1.15 million IOPS is capped at 3.7-4ms as well.

Learn how Violin and Microsoft collaboratively optimized this solution to leverage WFA’s unique performance profile so you can run your Hyper-V workloads in a Flash.

For more information on the WFA and Hyper-V, go to

Vote for Us! Violin Memory Selected as Finalist for Two SVC Awards

by on October 6, 2014

2014 SVC Awards Finalist logo

Violin is nominated for two awards: SSD/Flash Storage Product of the Year and Storage Company of the Year.

The Storage, Virtualization, Cloud (SVC) Awards recognize outstanding products, projects and services, and honor the companies and teams operating in the cloud, virtualization and storage sectors. The awards are jointly sponsored by DataCentre Solutions, Virtualization World, and Storage Networking Solutions.

Vote for Violin Memory!

The Problem with “Always On” Deduplication

by on October 1, 2014

What is “Always On” Deduplication?

Deduplication is the process of removing redundant blocks of data. Several all-flash array vendors provide deduplication that can’t be turned off – that’s the “always on” approach. You might ask what problem “always on” is trying to solve. That’s harder to explain: it’s solving an architectural problem for all-flash arrays.

The growing use of flash storage to deliver high performance – in a world that has become increasingly intolerant of mechanical disk drive latency – brings challenges of its own. One of those challenges is flash cell wear-out, a characteristic of flash that no all-flash array vendor can avoid. A NAND flash cell can sustain only a fixed number of writes before its performance significantly degrades; at some point the cell becomes unusable. To make flash work as intended, it’s necessary to design an architecture that gets the longest life from a cell without the burden of too much cost or performance degradation. Thus, every flash controller has a function to manage the life of a flash cell.
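The arithmetic behind wear management can be sketched in a few lines of Python. The cycle budget and cell count below are illustrative assumptions, not any particular product’s specification:

```python
# Minimal sketch of why flash controllers spread writes ("wear leveling"):
# each NAND cell tolerates a fixed number of program/erase cycles, so
# hammering one cell exhausts it far sooner than spreading writes evenly.
# The numbers here are hypothetical, chosen only to show the effect.

CYCLE_BUDGET = 3000   # assumed P/E cycles per cell
NUM_CELLS = 8

def writes_until_first_wearout(pick_cell):
    """Apply writes one at a time until some cell exhausts its budget."""
    wear = [0] * NUM_CELLS
    writes = 0
    while max(wear) < CYCLE_BUDGET:
        wear[pick_cell(wear)] += 1
        writes += 1
    return writes

# No wear leveling: every write lands on cell 0.
naive = writes_until_first_wearout(lambda wear: 0)

# Wear leveling: always write to the least-worn cell.
leveled = writes_until_first_wearout(lambda wear: wear.index(min(wear)))

print(naive)    # 3000 -- one cell's budget and the array is degraded
print(leveled)  # 23993 -- nearly budget * cells, roughly 8x the endurance
```

The same logic is why reducing the total number of writes (for example via deduplication) also extends flash life: the budget is fixed, so fewer writes means more calendar time before any cell reaches it.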

Deduplication came of age in the 1990s, originally for backups. It allowed less data to be written to tape, which reduced both the storage needed for backups and the backup window, since writing less data takes less time. This technology was then looked at in a new context: as a way of managing flash wear by reducing the amount of write traffic. By always deduplicating data before writing it, you get longer flash life because you reduce the number of writes. Thus the “always on” option – and it is commonly done to manage flash resiliency, not because the applications necessarily benefit from deduplication.
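As a sketch of the idea, here is a minimal inline-deduplication loop in Python. The block size, fingerprint choice (SHA-256) and data stream are assumptions for illustration, not any vendor’s actual implementation:

```python
import hashlib

# Minimal inline deduplication sketch: fingerprint each incoming block
# and physically write only blocks whose fingerprint hasn't been seen.
# Fewer physical writes means less flash wear -- the architectural
# motivation behind "always on" dedup.

BLOCK = 4096
seen = {}       # fingerprint -> physical location (here, just a list index)
physical = []   # the blocks actually written to flash

def write(block: bytes) -> int:
    """Return the physical index this logical block maps to."""
    fp = hashlib.sha256(block).digest()
    if fp not in seen:                 # new content: pay for a real write
        seen[fp] = len(physical)
        physical.append(block)
    return seen[fp]                    # duplicate: just add a reference

# A backup-like stream: 100 logical blocks but only 3 distinct contents.
stream = [bytes([i % 3]) * BLOCK for i in range(100)]
mapping = [write(b) for b in stream]

print(len(stream), len(physical))   # 100 logical writes, 3 physical writes
```

The win depends entirely on how repetitive the data stream is – which is exactly why the approach pays off for some workloads and not others, as the next sections discuss.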

The “Always On” Deduplication Approach

All-flash array vendors who use solid state disks (SSDs) depend on their SSD vendor to provide flash management in the controller on the SSD. There is also controller software in the array, where deduplication can be used to reduce the number of writes and hence extend the life of the flash in the array. But using deduplication to manage flash resilience has collateral impact. Some applications make very good use of deduplication: virtual desktop infrastructure (VDI) can reduce the amount of storage required by up to 90%, and therefore reduce the amount of wear on the flash. Virtual server infrastructure (VSI) can also benefit greatly, saving up to 65% of the storage and also saving on flash wear. However, not all applications are a good fit for deduplication.

The Problem with the “Always On” Deduplication Approach

Databases are an example of an application that is a poor fit for deduplication. There is some small space benefit for databases, although nothing like the benefit seen with VDI or VSI. The bigger problem is the way in which database systems store data. A relational database such as Oracle has no duplicate data blocks, because each block in a tablespace (the logical container in which tables and indexes are stored) contains a unique key at the start and a checksum incorporating part of that key at the end. As a result, most shops are going to see little space saving, while paying the price of increased latency as the hardware pointlessly attempts to find matching blocks.
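A simplified sketch makes the point. The header and checksum layout below are invented for illustration (this is not Oracle’s actual on-disk block format), but the effect is the same: a unique per-block header plus a checksum over the whole block means no two blocks are ever byte-for-byte identical, even when they hold the same row data:

```python
import zlib

# Illustrative database block: unique address header + payload + checksum.
# Because the header differs per block and the checksum covers the header,
# identical payloads still produce distinct blocks -- nothing deduplicates.

BLOCK = 8192

def make_block(addr: int, rows: bytes) -> bytes:
    header = addr.to_bytes(8, "big")                     # unique block address
    body = rows.ljust(BLOCK - 12, b"\x00")               # padded row payload
    tail = zlib.crc32(header + body).to_bytes(4, "big")  # checksum over block
    return header + body + tail

same_rows = b"identical row data"
blocks = [make_block(addr, same_rows) for addr in range(1000)]

# A dedup engine sees 1000 distinct blocks despite identical payloads:
unique = len(set(blocks))
print(unique)   # 1000 -- zero space saved, full fingerprinting cost paid
```

Every block still had to be hashed and looked up, which is the latency cost the paragraph above describes, with nothing to show for it.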

It’s possible that some deduplication can be achieved by customers storing copies of their databases on the same array (but this is a better use case for space-efficient snapshots), by the deduplication of unallocated space (but this is really a use case for thin provisioning) or by the accidental deduplication of critical entities (Oracle deliberately stores multiple copies of its key files such as redo logs, control files, etc.). But the reality is that deduplication has no place in a tier-one database.

Another workload that doesn’t usually make sense for deduplication is encrypted data. Encryption produces, by design, a data stream with no repeating patterns, so deduplication only adds latency. Think of credit card numbers, a common workload to be encrypted, and you’ll see why encrypted data has low affinity for deduplication. To deduplicate encrypted data you must have access to the unencrypted data so that the storage system can identify duplicates, which implies the encryption can’t be performed within the application if you want to deduplicate that data. Any storage processing of encrypted data needs to be architected very carefully to preserve the security of the data.
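A toy demonstration shows why. The cipher below is deliberately simplistic and NOT secure – it exists only to illustrate one property shared by real ciphers: a fresh random nonce per write means even identical plaintext blocks produce completely different ciphertext blocks, leaving a dedup engine nothing to match:

```python
import os
import hashlib

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stream cipher for illustration only (NOT secure)."""
    nonce = os.urandom(16)                   # fresh randomness per block
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):   # expand keystream to length
        keystream += hashlib.sha256(
            key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + ct                        # nonce travels with the data

key = os.urandom(32)
block = b"4111-1111-1111-1111" * 200         # the same plaintext every time
ciphertexts = [toy_encrypt(block, key) for _ in range(100)]

# 100 identical plaintext blocks, yet every ciphertext is unique:
print(len(set(ciphertexts)))   # 100 -- a 0% deduplication rate
```

Storing the nonce alongside the ciphertext is what makes decryption possible later, and it is also exactly what guarantees no two stored blocks ever match.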

The Violin Solution: A Different Approach

Violin takes a different approach: since we control the flash architecture, we don’t need to rely on deduplication to manage flash resiliency. Violin’s Flash Fabric Architecture™ (FFA) takes a fundamentally different approach to flash resilience. Because we work directly with the flash die, we can manage resilience at the array level. This allows us not only to avoid the hot spots that cause early SSD burn-out, but also to improve performance through the parallelism built into the architecture. When writes and flash management (including garbage collection) are handled at the array level, the life of the flash is extended, and the latency spikes of SSDs going through garbage collection are avoided.


When you place a workload that could benefit from deduplication, like VDI or VSI (about 14% of most data center workloads), on a Violin array, granular control lets you take advantage of the reduced writes. When you have a workload that will see little or no benefit, like databases or encrypted data, you can turn deduplication off. You decide whether to use deduplication on a given workload. Violin does not implement “always on” deduplication, because it’s not always a good idea from an application point of view – and Violin doesn’t need it, because our FFA provides a better way to preserve flash resilience. In fact, we provide a deduplication dashboard showing the effective deduplication rate, so you can see what is working and what isn’t. If deduplication is good for a workload, you might go looking for similar workloads to deduplicate; if it’s not working, throw it out and make room for something that will benefit from the technology.

Violin provides you the tools to get the most from deduplication technology: you turn it on when there is a benefit, and you turn it off when there is none.

Why would you do it any other way?

To see what IDC thinks about inline deduplication, see their white paper on the topic here:

IDC White Paper: Why Inline Data Reduction Is Required for Enterprise Flash Arrays

To see the Violin deduplication solution check out the Concerto 2200 here:

Larry Ellison, Harbinger of Change

by on September 24, 2014

This year you could say that #transformation is the best hashtag for Oracle OpenWorld. News of Larry Ellison’s stepping down as CEO of Oracle signals a change of season.

There is a fundamental shift in the data center today, just like in the 1970s, when “Ellison saw IBM’s relational-database research and realized he could make a business out of it before IBM knew what a good thing it had.” Ellison was a harbinger of change then – and a harbinger of change now.

At Violin Memory, we see major leaps in the structure favored by today’s enterprise data center. And just like Larry Ellison, we try to recognize fundamental shifts in data architecture and capitalize on our insight into data center trends.

What was the harbinger for change this time? Was it the conquest of the Cloud? Or can we pin it on Software as a Service? If we are experiencing a shift in the data center, then data center consolidation driven by faster I/O certainly points toward an evolution in database execution.

Although change is in the air, Oracle remains the granddaddy – and critical backbone – of many a serious datacenter. Software types who have been at this game for a while are keeping their glass full as they toast to Oracle this week. At the same time, those in the know are keeping their eyes open to options that might enable a positive change in their use of Oracle.

Do you need Oracle to run 20,000 IOPS in under 0.5ms?


Chat with Violin Memory to find out how we enable positive change for every business customer. Stop by our booth to hear for yourself how we transform business lines, consolidate Oracle database infrastructures and lower costs in the data center. We’re not just talking about a fundamental change in how data is stored and served – we are living it!  Are you?

Capitalize on your ability to spot fundamental changes in the data center. Get the full scoop from Violin Memory booth #307 at Oracle OpenWorld where we feature these cutting-edge presentations:

Eric Herzog, CMO and SVP of Business Development, will share how Violin Memory revolutionizes the economics of a datacenter.

Gilbert Standen, Consulting Engineer, will share how you can build Oracle on all flash arrays.

Matt Henderson, Director of Business Development Solutions, will share how to build a modern infrastructure for your monolithic database.

Nathan Fuzi, Consulting Engineer, will share how to analyze your own database performance.  Nitin will also share his recent publication, Database Cloud Storage: The Essential Guide to Oracle Automatic Storage Management, and be available to sign books after his presentation.

Beyond the transformative changes that Violin Memory will present, there are plenty of other cutting-edge players at Oracle OpenWorld. See five bands over three epic nights.

This year at Oracle OpenWorld we listen to the sounds of change. We think about Larry Ellison and recognize that this is not Ellison’s swan song. Not a chance. Those of us who admire this entrepreneur and granddaddy of relational databases recognize that Larry Ellison is not leaving the race, he’s just changing his heading. Larry is sailing into a rosy sunset. Whatever direction Larry takes, we wish him fair winds and following seas.


See you at the Violin Memory booth at Oracle OpenWorld.


With thanks to Tiernan Ray at Barron’s for source material:

With thanks to Oracle for image of relational database:
