One of the most important things you can do as a marketing leader is to develop and maintain a current, detailed understanding of the competitive landscape. Your knowledge of the marketplace and your company’s relative position in it underpins your positioning and go-to-market strategy, giving you the insight you need to make smart business decisions that keep you competitive.
But the competitive landscape is in a constant state of flux, especially in fast-moving technology sectors like big data and analytics. New start-ups pop up every day. Established players add, remove, sell, acquire, update and enhance their products and solutions continually. Keeping up is no easy task!
There are a number of ways to keep in touch with what’s happening in your marketplace. Many rely on online or printed resources: analyst reports, blogs, thought-leading business journals, industry-specific publications and social media groups. But it would be unwise to lean entirely on these resources at the expense of information gathered in webinars, at conferences, or through conversations with your customers, field staff, industry peers and colleagues.
All of these tactics will keep you current on the broad strokes of the industry, but it’s important that you also perform periodic deep-dive competitive analyses. A fresh and focused analysis will:
- Validate your understanding
- Dispel apocryphal claims
- Crystallize your value proposition
- Spot opportunities
- Uncover best practices
- Reality check your pricing policy
- Re-assess your partnerships
- Shape product and corporate strategy
Bootstrap Marketing is skilled in performing market research and using this as input to an actionable marketing plan. If you need help with a competitive analysis, please reach out to us. Bootstrap Marketing is here to help!
Performance and interoperability testing is another important facet of research and product validation. Through our involvement with big data players from start-ups to industry leaders like SAP, it has become apparent to us that one of the things the big data industry needs is an independent facility to test and benchmark big data applications and validate vendors’ claims about performance and interoperability.
We are therefore pleased to announce our partnership with Cloudwick Labs, a community-powered center of excellence for enterprise big data use cases, benchmarks and best-practice research, located in Newark, CA. Cloudwick’s big data expertise is built on more than 60,000 hours of big data development projects delivered for companies such as Bank of America, Visa, JP Morgan and Wal-Mart. To this expertise, they have added a state-of-the-art lab with support from leading vendors including DataStax, Hortonworks, Cloudera, Mellanox, Fusion-io, Extreme Networks and SUSE.
At Bootstrap, we believe that initiatives like Cloudwick Labs together with our own big data market and TCO research can only strengthen and help accelerate the adoption of big data solutions.
From our earliest days, one of Bootstrap’s key value-adds to our customers has been our ability to conduct rapid, accurate and relevant research to validate their value-proposition with end-customers, and to inform their decision making and go-to-market planning. To meet the increasing demand for these services, we have established a new division – Bootstrap Research – that provides research solutions in four areas:
- Expert Validation – the “Bootstrap Brains Trust” is a panel of big data industry experts who can either provide direct feedback or connect us to others who can provide expert validation of customer ideas and messaging.
- Online Surveys – using short, sharp online surveys, Bootstrap can bring the power of numbers to validate your thinking around market, customer and product issues.
- Competitive Research – knowing your competition is a critical part of any go-to-market plan, but it’s just not that easy for vendors to do. Bootstrap can help with focused, incisive research that differentiates you from the competition.
- Total Cost of Ownership Modeling – TCO models are essential to help companies establish their pricing model and to decide how to present pricing to customers. Bootstrap can help by creating comparative TCO models that give you the information you need to make these decisions.
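As a simple illustration of what a comparative TCO model boils down to (the cost categories and every figure below are hypothetical – this is not Bootstrap’s actual methodology, nor any vendor’s real pricing), a three-year comparison of two deployment options might be sketched like this:

```python
# Hypothetical three-year TCO comparison for two deployment options.
# All figures and cost categories are illustrative only.

def three_year_tco(license_per_year, hardware_upfront, admin_hours_per_month,
                   hourly_rate=85, years=3):
    """Sum up-front hardware, recurring licensing and ongoing admin costs."""
    licensing = license_per_year * years
    admin = admin_hours_per_month * hourly_rate * 12 * years
    return hardware_upfront + licensing + admin

on_prem = three_year_tco(license_per_year=40_000, hardware_upfront=120_000,
                         admin_hours_per_month=60)
cloud = three_year_tco(license_per_year=75_000, hardware_upfront=0,
                       admin_hours_per_month=15)

print(f"On-premises: ${on_prem:,}")
print(f"Cloud:       ${cloud:,}")
```

Even a toy model like this makes the pricing conversation concrete: it forces you to decide which cost lines belong in the comparison and over what period, which is most of the work in presenting pricing to customers.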
If you are serious about building a business, you need the numbers to back it up. Let Bootstrap Research’s pragmatic, customer-focused approach get you the information you need to make better, more informed decisions.
We were unable to attend the recent Strata event in New York, but we asked big data guru Stephanie Vargo-Walker to keep an eye on some of the most interesting developments announced at the event. Here’s her first guest blog on Apache Spark.
Apache Spark is one of the software projects to come out of UC Berkeley’s AMPLab and is a component of the Berkeley Data Analytics Stack (BDAS). Spark is an in-memory cluster-computing framework for data analytics that provides primitives for loading data into a cluster’s memory and querying it repeatedly, while simplifying development with Python, Java and Scala interfaces.
Spark was initially developed for two applications where keeping data in memory helps: iterative algorithms, which are common in machine learning, and interactive data mining. In both cases, Spark can run up to 100x faster than Hadoop MapReduce. Spark can also be used for general data processing where performance and time to results are critical, and it can access any data source supported by the Hadoop Distributed File System (HDFS), which makes it easy to run over existing data.
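To make the in-memory idea concrete without requiring a running cluster, here is a pure-Python analogy (illustrative only – this is not Spark’s API): the working set is materialized in memory once, then queried repeatedly without re-reading the source, which is essentially what caching buys an iterative Spark job.

```python
# Conceptual analogy of Spark's in-memory caching (not Spark's API):
# materialize an expensive-to-produce working set once, then run
# repeated queries against it instead of re-reading the source each time.

def load_records(source):
    """Stand-in for an expensive read of a large dataset (e.g. from HDFS)."""
    return list(source)

source_lines = ["ERROR timeout", "INFO ok", "ERROR disk full", "INFO done"]

# Analogous to filtering an RDD and caching it in cluster memory.
cached_errors = [l for l in load_records(source_lines) if l.startswith("ERROR")]

# Iterative queries then reuse the in-memory working set:
n_errors = len(cached_errors)                                   # first query
n_timeouts = len([l for l in cached_errors if "timeout" in l])  # follow-up query
```

In real Spark the source would be terabytes on HDFS, so avoiding the repeated read is where the large speed-ups for iterative and interactive workloads come from.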
While Spark was initially created in the UC Berkeley AMPLab, it is now an Apache Incubator open source project and is being used and developed at a wide array of companies from IBM and Intel to startups like Conviva and Quantifind.
At the recent Strata Conference in NYC, Cloudera officially announced support for Spark as part of Cloudera’s CDH distribution. This combined offering gives developers the best of both worlds – batch processing on Hadoop and real-time, in-memory processing on Spark.
Spark is being well received by big data practitioners. At Strata, a half-day tutorial session on Spark was packed with people wanting to learn more, and San Francisco will play host to the first-ever Spark Summit community event on December 2nd and 3rd.
Where speed to decision creates competitive advantage, it might be time to consider putting a little Spark into your Big Data platform.
Embarrassingly, it had been several years since we last reviewed our email marketing leads list. The open rate for our monthly newsletter was hovering around the 10% industry average, give or take, but we knew that much of our list had become stale. Last month we bit the bullet, invested about a day of work and met with resounding success: our open rate jumped to over 23%. The clean-up was well worth the effort.
Here are a few tips for cleaning up your leads list:
- Throw away leads that haven’t opened the last 3-4 emails
- Buy some fresh leads with a demographic profile similar to a profile with which you’ve had past success
- Add your LinkedIn contacts
And don’t wait years – a twice-yearly scrubbing is a good rule of thumb.
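The first rule above is easy to automate. Here is a minimal sketch (the record layout, field names and four-email threshold are all assumptions for illustration, not a prescribed format) that keeps only the leads who opened at least one of the last four sends:

```python
# Keep only leads who opened at least one of the last N campaign emails.
# The lead records and field names here are hypothetical.

def scrub_leads(leads, last_n=4):
    """Return leads with at least one open among their most recent `last_n` sends.

    Each lead is a dict with an 'opens' list of booleans, newest send last.
    """
    return [lead for lead in leads if any(lead["opens"][-last_n:])]

leads = [
    {"email": "a@example.com", "opens": [True, False, False, False, False]},
    {"email": "b@example.com", "opens": [False, False, True, False]},
    {"email": "c@example.com", "opens": [True, True, True, True, True]},
]

active = scrub_leads(leads)  # drops a@example.com: no opens in the last 4 sends
```

Most email marketing platforms can export per-send open data in some form, so a one-off script like this turns the “throw away stale leads” rule into a repeatable, twice-yearly job.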
During a recent meeting of our big data “brains-trust”, we asked ourselves “is big data now synonymous with Hadoop?” Here’s what we concluded.
A year ago, our answer would have been a clear-cut no; twelve months on, we’re tempted to say yes, although we’d now have to say “Hadoop and NoSQL”, not just Hadoop, especially among Gen X-ers and Millennials, many of whom are highly influential in big data start-ups and projects. Indeed, the emergence of NoSQL from cult status to the mainstream has been one of the most interesting stories of 2013. To date, NoSQL has attracted more VC than end-user investment, although we expect that to change quickly over the next year.
One very interesting people-based statistic backing up this claim: NoSQL skills are in high demand in big data job postings. For example, in the 6 months to 21 October 2013, 44% of job postings for Hadoop developers (*) also asked for NoSQL skills in general, 34% for MongoDB and 31% for Cassandra.
(*) Source: IT Jobs Watch, Paradigm Software (UK) Limited.
Don’t believe me? Look at My Runway, TwoGo, SAP Jam or what SAP is doing in its sports technology group to get a view on the new SAP.
But, at its core, SAP is an enterprise software and, now, cloud company. Of course, databases have always been at the core of enterprise software, so it was natural that SAP would need to come out with a Hadoop strategy before too long. And what SAP has announced is very interesting.
SAP has announced that it will partner with two Hadoop distribution (“distro”) vendors, Intel and Hortonworks. SAP will resell Hortonworks so that it can offer a 100% open source Hadoop solution (yawn) but, when it comes to the partnership with Intel, things get more interesting. As SAP’s new SVP and GM of Big Data, Irfan Khan, said, “SAP is going to benefit from deep engineering integration between SAP HANA and Hadoop”, leveraging years of work that Intel has done with SAP around the HANA database.
SAP + Hadoop = ? What I think we’re seeing emerge at SAP is a hybrid data management strategy built on a model where traditional SAP application data, SAP HANA in-memory data, and Hadoop data (FYI, NoSQL to follow) are unified by the SAP HANA Smart Data Access layer. This infrastructure will support cross-platform, big data-enabled apps such as SAP Demand Signal Management, with an analytics layer running in-memory on SAP HANA.
One other thing SAP threw into its Hadoop announcements: SAP will provide Level 1 and 2 support, offering the single point of contact that has been sorely missing for Hadoop-based applications.
SAP appears to be embracing a realistic view of the emerging enterprise data management environment and to be leveraging traditional strengths in partnering (Intel) and knowledge of the enterprise (support) to provide a differentiated and valuable Hadoop option for its customers.
Wasn’t it only on July 8th that Oracle announced a new partnership with Salesforce.com, one that would ensure, according to Larry Ellison, that “the Salesforce CRM applications, the Oracle HCM or ERP applications, and those things have to just start sharing data and working together seamlessly – as if they were from one vendor”?
Now it seems this short-lived love affair is over. At OpenWorld, Oracle announced SaaS versions of its apps products, database-as-a-service, and 10 additional cloud services, many of which appear to target Salesforce.com. Some of these services are clearly aimed at Amazon Web Services (for example, the Compute Cloud and Object Storage Cloud), but it’s hard to miss the targeting of services that even mimic Salesforce.com’s XYZ_Cloud naming (e.g. Sales Cloud, Service Cloud, etc.), for example:
- Documents Cloud – file sharing and collaboration
- Mobile Cloud – tools and infrastructure for building and hosting secure mobile apps
- Cloud Marketplace – a site where partners can list apps that integrate with Oracle’s own cloud offerings
In addition, Oracle announced a Billing and Revenue Management cloud service for companies with subscription-based business models that will inevitably draw comparisons with Zuora.
So, are these me-too plays just designed to create FUD for the established cloud vendors? Not so, says Oracle’s SVP of Applications Development, Chris Leone, who claims that Oracle’s entrance into the market elevates the cloud from the tactical to the enterprise level by providing consistent SLAs and removing the need to integrate multiple cloud apps.
Of course, the notion that Salesforce isn’t an enterprise grade solution will come as a surprise to Salesforce.com’s 100,000+ customers and 2,000,000+ subscribers but then, as Oracle’s sudden cloud conversion shows, we live in a world of surprises.
And that true love thing? Don’t worry, by Dreamforce there’s a good chance it will be back on again.
I like sailing as much as the next man. In fact, I probably like sailing considerably more than the next man judging by the millions of Bay Area residents who have totally ignored (shunned?) the America’s Cup.
However, I do not like sailing as much as Larry Ellison. Not only does Larry like sailing enough to pump hundreds of millions of his, and Oracle’s, money into the America’s Cup, he also likes sailing enough to miss his own OpenWorld keynote.
Larry’s decision to watch his team fight to complete the greatest comeback in America’s Cup history rather than deliver his umpteenth OpenWorld keynote shouldn’t really surprise anyone.
Sure, the Oracle OpenWorld attendees who actually spent their money and invested their time to hear Ellison talk have a case but, in my opinion, not so the commentators, bloggers and analysts who have taken the opportunity to jump on Ellison’s decision and get all high-and-mighty about it. For example, analyst Michael Krigsman of Asuret voiced his opinion: “While Oracle asks customers to prioritize its products over competitors, Ellison made the decision that racing, his passion and hobby, is more important than customers.”
At least Constellation Research analyst Ray Wang could see the irony in the situation and thought Larry had set the stage for a truly grand entrance onto the keynote stage: “What I would have suggested was broadcast the race live, then have Larry helicopter in and do a live feed of him walking into Moscone, talking about Exadata and how big data helped change the game.”
And with Team Oracle winning the final race, Larry had the last laugh.