Timing is everything

January 4, 2012

There’s no penalty for jumping the gun

On your marks. Get set. Go. When the starting gun goes off, there is always going to be a rush of adrenaline, a surge of excitement, and a striving to get up to speed and do your best.

But when the starting gun goes off in relation to a Gartner Magic Quadrant (MQ) assessment of your company, in many ways it is already too late.

Magic Quadrants generally appear once a year. For the companies on the receiving end, they can be make-or-break factors, with a huge influence on business prospects for the year ahead.

For the analysts involved, they are important pieces of work, but they have to be fitted in alongside research reports, client inquiries and meetings, events and presentations, custom engagements, webinars, blogs, and a host of other commitments. Leaving all the rest of an analyst’s annual workload aside, producing a Magic Quadrant means identifying and investigating multiple companies that will appear in the final diagram. On top of this, the analyst has to give due consideration to all the peripheral candidates that need to be evaluated before decisions can be taken about whether or not they should be included.

The wonder is not that so many MQ assessments leave so many vendors feeling disappointed, but that so many MQs win general acceptance as being pretty fair, diligent, and useful assessments of the state of play in particular markets.

Perspectives can’t be changed overnight

The key to making yourself visible and understood, as a small or medium-sized vendor, and getting the most positive assessment your performance would justify, is to make sure you are off and running well before the starting gun is fired.

That may sound paradoxical, but Magic Quadrants don’t come out of the blue. As mentioned earlier, they appear roughly yearly. And the best time to start preparing for a successful assessment is 12 months ahead, immediately after an MQ has been published.

Even then, there’s a lot to do in the time available.

If you believe the analyst specializing in your area ought to be adjusting his or her perspective and reframing the way your sector is viewed, you need to be making contact, briefing the analyst, and putting forward evidence to support that change of standpoint very early in the year. As the process gathers pace, your chances of influencing the analyst’s view of market issues and priorities rapidly decline.

Right from the start, you need to be reviewing the factual evidence that will demonstrate your capabilities, identifying the gaps in it, and starting to gather or commission the elements you will need to support your case.

That means lining up the right references and collecting relevant case studies and customer success stories. It may mean commissioning customer surveys or market survey work that will lend strength to your arguments and credibility to your strategy. It will certainly involve making sure you have clean, presentable internal data relating to sales, marketing, and financial solidity and performance.

Your bigger competitors know the score 

If you do not start preparing all this evidence early, you will simply run out of time. But where big corporations put money and resources into analyst relations teams to prepare their evidence and put the story across, smaller firms cannot take that approach. If you can’t slug it out toe-to-toe, you have to be smart about what evidence you are going to collect and how you are going to deploy it.

The material that’s required will need to be thought about, prepared and substantiated as far in advance as possible. You’ll need a story, a plan, and an evidence pack – and you’ll need them all sooner rather than later, so the sooner you start, the better placed you will be.

Once the actual assessment process begins, you will find yourself staring at a pro forma list of maybe 50 questions, many of which will leave little scope for you to tell the story you want to tell. Within the formal process you may have the chance to do a product demonstration and possibly a briefing with the analyst. But the formal assessment period is short, and there certainly won’t be time to go gathering customer views or commissioning market surveys to back up your arguments. You need this ammunition in your locker before the clock starts ticking.

And there’s another reason why this is so essential. Magic Quadrants rate vendors in a relative way, not in absolute terms. They are essentially competitive. If you are not seen to be going forward, relative to your competitors, you may well find your dot moving backwards. It’s dot eat dot. That is why the bigger companies you’re up against employ the full-time analyst relations staff that you can’t afford.

But, believe me, the morning after the MQ is published, these people will be coming straight into the office, rolling their sleeves up, and starting work on preparation for the next one. If you’re going to compete successfully, you can’t afford to give them a head start.

Are we on target? The last thing we want is everyone agreeing with what goes into this blog. After all, if you don’t disagree with some of the points we’ve raised, we’ll be forced to be more and more provocative, and who knows where that will end? So let us have your thoughts. Have you seen the payoffs from preparing your evidence early? Or do you feel MQ assessments are a lottery, whatever you do to make your case? Shoot us down and have your say.


Hanging on the analyst’s every word

December 9, 2011

For the buyer, it’s about avoiding risk 

Why does anyone take notice of what analysts say and write?

If you ask the people who buy technology why they value the research reports they see from Gartner, Forrester, IDC, and the many specialist firms that have flourished in the last few years, the answer is that those reports help reduce risk.

Companies’ futures and people’s jobs depend on getting IT decisions right. It may not always be essential to buy the very best solution, but it’s vital to avoid placing your money on a horse that shouldn’t be in the race. If the research firms can just remove that element of risk, they’ve amply justified their cost.

For vendors – and less well known vendors, particularly – this risk factor is a matter of supreme importance.

It doesn’t have to be real. Perceived risk is what makes buyers decide to play safe and choose a big corporation and a name they know, even when specifications, performance figures, and price may all seem to point to a less familiar supplier.

But the problem is that it just takes the odd out-of-place word in an analyst’s write-up to ring those risk alarm bells.

How small is small?

Your company is no giant. You’re growing fast and building a loyal customer base, but you’re not Cisco or Autonomy or Dell. So the analyst casually refers to you as a small vendor in your field.  On the face of it, and by comparison with the biggest, that seems reasonable.

But that one word – “small” – could cast a shadow over an otherwise positive research report.

There may be praise for your technology, your market and distribution strategies, your clear and publicly declared development roadmap. But if cautious buyers, under pressure to reduce risk exposure, believe your small size introduces extra dangers, you will not be showing up on their shortlists.

They might fear it means you can’t afford to invest in product development. You might run out of cash and go bust, or be swallowed up by a bigger fish that would simply kill off your product.

Too much uncertainty, the thinking goes. Too much risk we don’t need to take on.  Let’s stick with Microsoft and keep an eye on these guys for the future.

The fact is, that’s your opinion 

At The Skills Connection, we estimate that merely looking risky could cost a growing technology company 25% of its potential sales opportunities, however brilliant its products may be.

The problem is the difference between fact and opinion – and the gray area where one shades off into the other.  When you see the draft version of an analyst’s assessment of your company for a Gartner Magic Quadrant or a Forrester Wave, for example, you will be offered the chance to comment on it and negotiate reasonable changes with the analyst.

Unfortunately, what you see as reasonable and what the analyst sees may be very different.  You can challenge facts, but the analyst will defend his or her opinions to the hilt. And you’ll do better to keep your powder dry and try to win the key battles about facts than pick a fight about opinions, which you will never be able to win.

Look for the evidence to back your point 

There’s a lot more to be said about this issue, and we’ll certainly be returning to it in future postings.

For the moment, though, it’s worth looking again at our earlier example of the use of the word “small”. Is that fact or opinion?

It could be a matter of fact, if everybody accepted a standard definition of smallness, based on, say, sales revenues or payroll numbers. But while the US Small Business Administration, for example, is happy to draw a sharp line, defining a business in the “computer programming, data processing and systems design” category as small if it has receipts of less than $25m, there’s no common understanding of what the term means.

Some people talk about firms as small if they have fewer than 500 employees. But what about the growing software company with only 150 employees and sales of less than $20m, but an impressive customer base of 400 companies?

The fact is, small is such a slippery, vague, subjective term that it amounts, effectively, to a statement of opinion. You should be trying to convince the analyst to change “small” to “specialist” or “pure play”, or at least to get the wording modified to say “small, innovative vendor with 400 customers”.

Think clearly. Think like a lawyer (though don’t get your lawyers involved – that’s a red rag to a bull). Demand consistency and look for evidence to show that you have not been treated objectively. If your company has been dubbed “small” when smaller firms in your industry have escaped without that label, that’s factually misleading. If you can point to that kind of inconsistency, put your case, be firm and you should be able to get the change you want.

Are we on target? The last thing we want is everyone agreeing with what goes into this blog. After all, if you don’t disagree with some of the points we’ve raised, we’ll be forced to be more and more provocative, and who knows where that will end? So let us have your thoughts. Have you found negotiating fair wording of factual matters with analysts straightforward? Or did you feel you were banging your head against a brick wall? Shoot us down and have your say.


How level was my playing field?

November 21, 2011

Tilt! And you’re sidelined for another year

It’s no surprise. For small and medium-sized firms, going through the complete Magic Quadrant or Forrester Wave assessment cycle is always going to be a tense and nervy experience. But after the preparation, the effort, and the waiting, many are bound to end up disappointed.

Not everyone can be a Leader. Leadership status would be meaningless if too many companies shared that sought-after positioning in the upper right quadrant.

And, though there’s room for more research to quantify just how much commercial difference a good assessment makes, we all know that it can change a company’s fortunes in a matter of months.

So it’s inevitable, in the aftermath of publication, that many vendors will be unhappy. And, at some stage, every one of them is going to wonder just how fair and objective the whole process has been.

Yes, it’s biased in favor of bigger corporations

There are a lot of issues to be unpicked there – and we’ll come back to several of them in the next few months. But, for now, let’s start with the most basic question of all.

Does the whole assessment process have a built-in bias against smaller companies?

Let’s put it another way. Does being a small or medium-sized firm mean that you will always be pushed aside or trampled underfoot by the Oracles and SAPs, the IBMs and Microsofts, the HPs and the Salesforce.coms of this world, the big players that seem to take their places in the Leaders quadrant almost by right?

Well, there’s no point sitting on the fence about this. The answer’s yes, unless you take specific action to avoid it.

Let’s be clear. We know the research and analysis industry inside out and backwards. Our first seven employees have a total of 80 years’ Gartner experience between them. And we know how it all works.

There’s no deliberate bias. There’s no payola, pay-for-play, glad-handing, back-slapping trading of favors here. No bribery. No corruption.  No you-scratch-my-back-and-I’ll-scratch-yours. It’s all a lot subtler and more innocent than that.

It’s more a question of who is there in the analyst’s eyeline all the time. Who is seen at every conference? Whose product launches are unmissable in the trade press? Who makes the news with acquisitions and government contracts? Whose products, because they are known and recognized by everybody, provide the vocabulary and benchmarks for every discussion within your sector?

It’s nothing half a million or so couldn’t fix

These are all factors that are best addressed with the application of vast marketing budgets, running into the tens of millions of dollars each year.

But there is one other, equally important factor, that is not so obvious to the outside world. These companies have almost constant interaction, at all sorts of levels, with the research and analysis organizations. And they employ dedicated teams of analyst relations specialists to make sure that keeps on happening.

You don’t have to be Oracle to have your own analyst relations squad. Maybe three or four full-time staff, if they knew what they were doing, could deliver all the exposure you needed. But how many smaller firms can afford the fully-loaded costs of employing three or four extra people? What are we talking about? Maybe a half-million dollar budget line. Maybe a bit more.

If you have the luxury of this kind of dedicated team, with a decent budget for implementing the ideas that seem important, you can pretty well guarantee getting the right people from your company in front of the right analysts at the right time. If you don’t have the money and resources to act like this, you’re facing an uphill climb.

The smart alternative? Bring in the specialist skills you need

The fact is, the bias against smaller vendors is there, baked into the system. It’s a huge, unintentional bias – and we’ve seen plenty of instances of conscientious Gartner and Forrester analysts deliberately working to counteract its effects.  But that’s not something you can rely on.

What you can do – and this is why The Skills Connection developed its unique MQ Tune-Up program – is make sure you have access to the skills and thinking that those costly in-house analyst relations teams confer on the big players.

We can’t level the playing field completely and put you toe-to-toe with SAP or Salesforce. But we can draw on our industry background to help you plan and prepare for your analyst assessments.

We can coach you and test your arguments, tell you in advance what evidence you’re going to need, and help you fill in the questionnaires. We can roll up our sleeves, create winning slide decks, and help you develop demos that tell clear and memorable stories. We can give you access – on an affordable, part-time basis – to some of the shrewdest award-winning analyst brainpower the industry has seen.

If your company is destined to grow, maybe you can afford to wait until sheer scale puts you into a position to compete for the Leaders quadrant. If you’re in a hurry, we can help to tip the odds in your favor right now, over the months leading up to the next MQ or Wave.

Are we on target? The last thing we want is everyone agreeing with what goes into this blog. After all, if you don’t disagree with some of the points we’ve raised, we’ll be forced to be more and more provocative, and who knows where that will end? So let us have your thoughts. Are we being unfair when we say there’s a huge bias in the way assessments are done? Shoot us down and have your say.


Are We All Insane?

January 9, 2010

Albert Einstein is famously credited with saying that “Insanity is doing the same thing over and over again and expecting a different result.”  When it comes to training initiatives, are we all acting insane?

According to Neil Rackham (of SPIN fame), without practice, review, and reinforcement, 87 percent of skills taught during training will be lost within 30 days. If training initiatives aren’t linked to an associate’s development plan; if managers haven’t been trained on the skill and on the associate’s appropriate post-training practice, review, and coaching; if the skill hasn’t been marketed as important to the associate’s job performance, development, and advancement… then when it comes to training, it would appear that we are all acting insane.

How to regain our sanity

It’s been our experience that when a training activity is completed, it’s important to recognize that the training initiative is not. We see training as happening in three stages. The first is the marketing of the training before the activity begins. The second is the training activity itself. The third is where the learner has an opportunity to practice and hone their skills under the watchful eye of a manager, trainer, or peers who have already mastered the skill.

At the conclusion of a training initiative, training retention rates can be raised significantly when learners identify the things they want to work on over the next 30 days. Action plans need to be specific and shared with a manager, who can then give the learner an opportunity to practice the skills and provide constructive review and coaching. For example:

  • What are the things that you most want to work on or improve as a result of this workshop? Why?
  • What are the three specific actions you will take as a result of the training? For example, “I will <specific action> by <specific date> because <reason why I will do this/the benefit>.”

Best practice for us is that after 30 days, the associate and their manager review the results of the practice and coaching effort. The review session ideally begins with “as a result of taking the actions indicated in the Action Plan, and after practicing the skills, I have achieved the following…”. How does this match your experience? Are there other best practices that you follow?


The Rule of Four

December 4, 2009

So there you are working on your next great presentation. The story you have has the power to move mountains. But how do you get your audience to appreciate the story, understand it, internalize it and, hopefully, remember it? A big part of the answer is the rule of four.

Seven?
In a famous 1956 academic paper, George Miller posited the idea that short-term memory had a capacity of about “seven plus-or-minus two” chunks. That is, we can juggle about seven items of information simultaneously when considering a proposition. From that proposition have come a million (or probably a billion) presentation charts with seven (plus or minus two) bullets, in the belief that people would be able to absorb that information and understand the “mountain-moving” story.

Four NOT seven
The bad news is that it turns out the magic number seven works if what you are trying to keep in short-term memory is, say, seven single digits – but seven concepts? No way. More recently, Professor Nelson Cowan (of the Department of Psychological Sciences at the University of Missouri) has undertaken a range of studies that show the actual number to be four. Charts and diagrams that show a maximum of four comparative concepts work well (e.g. Gartner’s Magic Quadrant, the BCG growth matrix), and people remember them and can apply them. Once you go over that number, you have lost them. Actually, if you read David Rock’s excellent book “Your Brain at Work,” you will discover that additional findings show that the number of things we actually REMEMBER is only one. So why do the Magic Quadrant and the growth matrix work? We remember each of them as one entity, i.e. a specific diagram.

So take a moment and check any standard decks you use or key presentations that you are preparing. Do you keep to the rule of four? Do you agree with the rule of four? Finally for a great explanation of the application of the rule of four to all of your slide design take a look at Stephen Kosslyn’s book “Clear and to the Point” – in our view a book that is clear and to the point.


There is an ROI for training. Really!

November 18, 2009

A few years ago, when working as a research executive at Gartner, I attended a meeting to discuss initiatives to improve the quality of written research deliverables. As we discussed the issue, to my astonishment, one of the people at the table said that he didn’t think we needed to consider any research writing training for analysts; after all, “don’t we hire people who can already write?” I didn’t see that one coming. So today, when I went for my daily Dilbert fix, I was struck with a sense of déjà vu all over again.

Training isn’t a front burner issue, is it?
If clients and customers aren’t complaining, then in this economy, training isn’t a front burner issue, is it?  And, if clients aren’t complaining, then it means that everything is all right, right? Well, consider the following email message we received from a client (by way of a very happy analyst) “I just wanted to pass along that I thought your presentations were very well done this year. Please don’t take this the wrong way, but in past years sometimes you seemed a little less comfortable presenting to the large group, but this year you did a great job. I heard the same observation from several people.”

No training can be more expensive than some training
Companies wouldn’t consider not training salespeople. There is always money for training salespeople: new-hire training, periodic product training, and various sales skills training including selling to VITO, team selling, business acumen, sales methodology, solution selling, questioning techniques, and so on. Everyone intuitively understands that it’s easy to justify training salespeople, because salespeople are your front line in acquiring, growing, and retaining clients. So the ROI is obvious. But what about the ROI for analysts (or marketing staff, or consultants)?

ROI for analyst training
While it might not be so “in your face,” the ROI for analyst training is, in reality, certainly no less compelling than that for sales staff. At the end of the day, who truly wins you clients, and who has to deliver the necessary perceived value to keep them? Analyst training ROI exists at several levels.

  • Impact on new business win rates and retention rates – Research needs to be audience relevant, timely, clearly presented and well communicated. This is what impresses people enough to make them buy…  and this is what makes them stay.
  • Brand performance –  Your analysts are ultimately your brand and create your perceived brand value. Brand, of course, sells – just ask Google, Microsoft, Coca-Cola, Porsche….
  • Staff engagement  – Especially in times such as these, management tends to forget that research is 100% a people business. When IP is your business, the level of engagement of your IP creators is everything. Engaged staff create sales and growth, disengaged staff lose customers and kill business.
  • Efficiency – Ask any analyst what their biggest problem is and they will tell you it’s not enough time to do everything required. When everything is a top priority, the only answer is to help people become more efficient with their time. As one of our course delegates noted, “It literally takes me half the time to write a piece.” What is the impact of this type of efficiency on the bottom line of the business?
  • Sales performance – Frequently we find that the research firms we are speaking with struggle to express their value sufficiently well to clients. Analysts either disengage from the sales process entirely or, when they are supposed to be supporting it, give away the “crown jewels.” Selling the value of a soft product like advisory services requires a blend of skills between sales and research. Ignore the analyst’s role in this process at your peril.

There are alternatives
Training is clearly an expense, but not all training needs to be in a formal classroom setting. Indeed, much valuable training is conducted by peers or managers. And training can be delivered as e-learning, by webinar, and in other formats. If you don’t have the resources to develop suitable training, outsource it – it’s generally going to be cheaper anyway (and if you outsource with us, also the best quality 🙂 ). Can you really afford to ignore your most crucial business-winning resources?


“Are you being a good parent?”

November 5, 2009

What is the role for managers in an associate’s development? If the manager sends an associate to a training workshop or identifies an appropriate e-learning program, is it enough?

In our view – absolutely NOT. While formal training in a classroom or e-learning setting begins the process, much of the real learning occurs when the associate has an opportunity to practice and hone the new skills.

On August 28, 2006, a school teacher wrote a letter to the New York Times. In it she wrote “Most of the remedial kids I worked with… did not value book learning, and neither did their parents. The reading at home required as part of the program was rarely done. Most of these children did learn to read, but they may never catch up to peers whose parents made sure that there was homework time.”

Are you making sure your associate’s homework gets done?

Are you familiar with the content of the training and the skill? Before the training event, do you make the associate aware of the value you place on the skill and the training? After the event do you give your associate the opportunity and time to practice and hone their new skills? Do you follow up with the associate after the training to discuss the program and the activities you’ll both need to do to ensure that the training sticks? Do you regularly review and provide feedback?

Studies show that when managers speak positively about a training initiative, when they link it to improved job performance and opportunities for advancement, and when they speak well of the training program itself, learners get much more out of the training experience. Because much of the learning happens after the formal training experience, studies also show that managers who don’t follow up after the training for review and reinforcement of the skill, or don’t provide an opportunity for the learner to practice and hone it, are wasting 87 cents of every training dollar invested!

When the topic is associate development, are you being a “good parent”?


It’s the story, stupid!

November 2, 2009

In his New York Times Op-Ed column (October 31, 2009), Thomas Friedman poses the following question: “… How is it that a president who has taken on so many big issues, with very specific policies — and has even been awarded a Nobel Prize for all the hopes he has kindled — still has so many people asking what he really believes?”

Friedman follows with “I don’t think that President Obama has a communications problem, per se.…  Rather, he has a “narrative” problem. He has not tied all his programs into a single narrative that shows the links between his health care, banking, economic, climate, energy, education and foreign policies…. the president’s eloquence, his unique ability to inspire people to get out of their seats and work for him, has been muted or lost in a thicket of technocratic details. His daring but discrete policies are starting to feel like a work plan that we have to slog through, and endlessly compromise over, just to finish for finishing’s sake — not because they are all building blocks of a great national project. What is that project? What is that narrative?”

In other words, “what’s the story?”

While Friedman might say that “People have to have a gut feel for why an initiative, with all its varied nuance and complexity, is so important — why it’s worth the effort,” in their book, Made to Stick, the Heath brothers provide a prescription. That is if you want your story to be memorable, you need to create a narrative that is simple, unexpected, concrete, credible and emotional. And the story needs to be a good one.

It’s the story, stupid

Our brains are programmed much more for stories than for abstract ideas. If you want to sell a product or vision, or move a country, as one, into the 21st century, if you want to be effective, you must create and communicate a relevant, compelling narrative – a story.

Are your presentation stories and messaging compelling? Are your narratives simple, unexpected, concrete, credible and emotional? Have you tumbled to the conclusion that when you get in front of a customer or client that “it’s the story, stupid?”


Many a slip ‘twixt cup and lip

October 23, 2009

Yesterday I attended the annual Richard Lambert lecture organized by Harvey Nash & The Sunday Times. Richard is the Director-General of the CBI, the premier lobbying group for business in the UK. The theme of Richard’s talk was business reputation and the damage that recent events have done to businesses overall and business leaders specifically (and not just banks!).

This is a key issue moving forward for any company, as the breadth and speed of communication get ever wider and faster. A comment made in private by phone may appear on a blog and be under intense discussion on Twitter in a matter of seconds. If you don’t believe me, just check the web stats for your business and look at the sources of people coming to your site. More and more, these are not referring sites but blogs and Twitter posts!

How well and consistently you communicate your messages is ever more crucial. Just one slip ‘twixt cup and lip and soon everyone will know about it. Consistency, consistency, consistency – the golden rule for all of your market messaging?


The Young and the Neuro

October 15, 2009

In his Op-Ed column of October 13 in the New York Times, David Brooks offered some fascinating observations after attending the Social and Affective Neuroscience Society’s conference in New York City. Brooks said that when he spoke to the attendees, he felt that they were near the beginning of something long and important.

In the article he cited studies such as the one that scanned the brains of Yankee and Red Sox fans as they watched baseball highlights, and discovered that “neither reacted much to an Orioles-Blue Jays game, but when they saw their own team doing well, brain regions called the ventral striatum and nucleus accumbens were activated.” According to Brooks, “this is a look at how tribal dominance struggles get processed inside.” Although my takeaway, as a lifelong Yankees fan, was “I didn’t know Red Sox fans had brains.”

On a more serious note, there are lessons to learn here. Over time, as we discover how we process information, what happens in the brain when people are persuaded by an argument, how we work in and across groups (tribes), how we get motivated and de-motivated, how we perceive others’ feelings and pain, and the role culture plays in all of this, we have an extraordinary opportunity to dramatically improve how we sell, market, manage, and motivate.

While this may seem like science fiction, or something that can be applied only far off in the future, cognitive neuroscientists like Stephen Kosslyn, in his book Clear and to the Point, are already explaining how business people can use some of these discoveries to enhance their ability to communicate complex concepts. In our view, the “age of the brain” is just beginning. Do you agree?