A weekly Google Hangout dedicated to discussing content marketing, search marketing, SEO and more.
Topic: Finding the right search tools for your business. Knowing the search tools to use as part of your organizational workflow.
Ray Grieselhuber, Founder & CEO at GinzaMetrics
Erin O’Brien, COO at GinzaMetrics
FULL VIDEO TRANSCRIPT
Erin: Hey, everyone. Welcome to FOUND Friday. As always, I’m your host and participant, Erin Robbins O’Brien. With me, I have Ray as well, CEO and founder of GinzaMetrics. We have a lot of interesting stuff to talk about today; some things from conversations that we’ve had over the past week or so, as well as conversations from industry topics.
Ray, I wanted to kick it off by following up on the discussion we started yesterday afternoon around different feature sets for tools in the industry. As we’ve seen, the SEO business has been around for a little while now, and different tools have developed differently over that time. Social analytics, monitoring, and measurement have become a thing.
Also, as content marketing has really started to emerge and tools have developed around that, you and I were chatting a little about what really separates different tools and which is best for whom. We talked about the difference between a shallow feature set, a very baseline set of analytics, versus something really deep, with lots of data and lots of analytics available, and how that affects user experience and who the target audience for each might be.
So maybe the question to start out with is: what’s a minimum feature set for someone? If we’re going to call something shallow analytics versus deep analytics, how shallow can you get and still be a product?
Ray: You’re talking about the SEO industry, specifically?
Erin: Yeah. Let’s talk about SEOs. Let’s stick with that.
Ray: I’d say probably in SEO, the classic minimum viable product is probably rank tracking of some sort or another. Like, “What’s my rank on Google?”
Maybe another example would be an audit. HubSpot, for a long time, has had their website grader thing. I think it is a great piece of content for them and it’s a tool that they use (I’m imagining) to generate quite a few leads for their business. And it’s one of the things where you basically plug in a site URL and spit back a bunch of recommendations: fix your title tag, optimize your Twitter followers, that sort of thing. I would say one of those two things is probably the canonical example for minimum viable tool in the space.
Erin: If we’re talking about minimum viable tools providing, say, rank tracking and things like that – with the way the Internet has changed from where we were five to seven years ago, when tools first started to emerge – would you say that a minimum viable product now needs to refresh data at least weekly, and preferably daily? Does timing define a minimum viable product now?
Ray: It’s a really good question. The interesting thing about minimum viable is, a lot of times, you can get away with ad hoc. I would say the website grader is a really good example of an ad hoc tool where there is no concept of daily or weekly. This is just one of those things you run on a one-off basis.
Rank tracking is a little bit more interesting. There used to be a ton of spot-check rank tracking tools where you’d basically go out and say, “Hey, show me my rank for this list of keywords I input.” Those are probably still out there, but they’re dying out, I would say, in favor of tools that track you more consistently. We would obviously argue that, at a minimum, you want to monitor things daily, although a lot of people are still in a weekly monitoring mode while they’re trying to get a handle on what’s out there.
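As a rough sketch of what those spot-check tools do once they have a results page in hand – the actual fetching of a SERP (scraping or an API) is a separate problem – the core logic is just a position lookup. The domain and result URLs below are hypothetical, not from any real query:

```python
from urllib.parse import urlparse

def rank_for_domain(serp_urls, domain):
    """Return the 1-based position of `domain` in a list of SERP result
    URLs, or None if the domain doesn't appear on the page."""
    for position, url in enumerate(serp_urls, start=1):
        if urlparse(url).netloc.endswith(domain):
            return position
    return None

# Hypothetical SERP snapshot for one keyword (in practice this would
# come from a scrape or a rank-tracking API, not be hard-coded).
serp = [
    "https://competitor-a.com/guide",
    "https://www.example.com/blog/seo-tools",
    "https://competitor-b.com/pricing",
]

print(rank_for_domain(serp, "example.com"))  # → 2
```

A daily tracker, as opposed to a spot check, would simply run this lookup on a schedule and store each day’s position so trends become visible.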
Erin: One of the things we talk about a lot with Ginza specifically, because we work in a space with lots of other players, is that our tool has a lot of depth, but we don’t necessarily expose it all right up front in the UI, because a lot of people find that overwhelming. Or it’s just data they don’t need at a glance on a regular, daily basis – but if they want to dive deeper, they can.
For people who are really looking for something with a ton of analytics and features – do you think it’s good or bad to say, “I just want something that checks off every single box of every potential analytic I could ever have”? How many people do you think are really using most of that?
Ray: That’s where things get really interesting. We’ve been in this industry for a while, and I think the challenge for us as a company – because we’re more on the enterprise side of things – is that we need to have a fully featured platform, but at the same time, one of the key things we’ve always striven to do is make the user experience as easy to use as possible. We recognize that training budgets are usually not there when implementing a new tool. Also, because of the way we distribute our platform, a lot of people come in and try us out without necessarily talking to a salesperson first.
We have this challenge of making the tool as easy as possible to use. There are other vendors in our space that are in the enterprise space as well, and they tend to have support from a sales team or an implementation team in order to enable them to close those deals. That’s a great alternative viable model. It sets up a different set of financial assumptions and everything else.
But what’s interesting is that companies in that space, taking that kind of product to market, tend to advertise the complexity or richness of their product’s features rather than its simplicity and ease of implementation. The way you distribute your product – whether through sales or through inbound leads to trials – really has an impact on almost every aspect of how you design your entire user experience and how you get people onboarded.
Erin: Do you think there’s still credence to this idea that picking the most expensive thing that checks off the most boxes, in terms of hundreds of different analytics, is a safety net for people – that nobody ever got fired for choosing well-known brand X? Do you think that’s still a thing? Or are people really more, “I just need these specific features,” or “I’m more budget conscious,” or whatever it is?
Ray: I think it’s totally a thing, and it gets perpetuated both by customers who build these requirements lists and send out RFPs with every possible feature you can imagine checked off, and by the vendors themselves, who put out buyers’ guides. It’s totally understandable. You also have third-party evaluations – reports put out by companies like Forrester and so forth. A lot of those parties have a really vested interest in building those checklists, because it feels like you’re creating a very thorough review of things.
We’re starting to see more maturity from customers and the market in general. A lot of times you’ll need lots of different things, and a lot of companies will potentially need maybe 80% of that functionality. But what you really need to do is focus in on what’s going to matter for your company.
So whenever I’m talking to a potential customer or partner, one of the first things I’ll ask is, “I understand you’re looking at three or four different platforms. What are the top three or four things that are really going to make the decision for you?”
A lot of times they’ll say, “That’s a good question. I haven’t even thought about that. I’m just looking at things and trying to figure out what feels the best.” And it becomes a very intuitive process built on a baseline of all these checklists of features.
We’ve been around now for a while, but in the early days of GinzaMetrics, coming up against those checklists was always a struggle in our journey as a startup, because there are companies that have been out there since 2007 and have had time to build those checklists. We’ve been around almost four years now, so we have a lot of those features if we were to check them off. But we still try to focus in on what the core value is going to be for each individual company.
We try to be really honest with people. A lot of times it comes down to what is ultimately a good fit for you. We all have essentially a lot of the same features, but every platform has its own differentiating factors. We’re not going to try to hard-sell you on something that’s not going to be a good fit. So it’s important for both the vendor and the customer to focus in on the top few things that really matter.
Erin: One of the things I brought up the other day – in that “data as more than just a spreadsheet” article – that I really want to drive home, and wish people would focus on more, is that the best tool is the one that you’ll use. I don’t mean that as trivially as it sounds. If you’re not going to use something very often because it’s clunky, hard to handle, a difficult UI/UX situation, requires too many steps, or doesn’t live natively on the devices you need it to – you can’t access it from a tablet, you can’t access it where you need to – then that tool, no matter how many features it has, is not going to be the best for you, because it’s not one you’ll actually utilize often.
One of the things we focus on a lot, and that you and I talk about regularly in product meetings, is what it is that people actually need to do. When they’re looking to purchase SEO, content, or analytics software – if they’re looking for this kind of platform – what is it that they’re going to live in day to day? What’s the thing they wake up in the morning and check, asking, “How do I measure the health of ___?” with whatever they’re looking for filled in?
Talking a little bit about this idea of soft data – it seems to be really difficult, and I know the comms profession struggles with it a lot, because so many things are conversational data points. These are conversations with people at conferences, public relations responses to things people have put out, blog comments, things on Twitter, etc. People ask, “How do I measure this beyond just counting tweets, counting Facebook likes, numbers of blog posts, and things like that?” Or beyond sentiment analysis: positive, negative, or neutral.
I’ll argue that the point is: measure everything, as much as you can. Start adding everything up and putting it into a format you like. Whatever that format is – if you want to write it down in a spiral-bound notebook, I don’t care – as long as you’re going to go through it. What you’ll eventually see, if you watch it for long enough, is real patterns in all this stuff.
If you read enough Twitter comments, you’ll see, “Hey, it turns out people don’t like this color in the graph because for some reason when you’re looking at it on the phone, it doesn’t show up very well. Maybe we should not use light yellow in a graph,” or “People are really trying to find this and they can’t find it on the website. They keep having to call or e-mail in. We should add this to FAQs.”
There’s a lot of that going on, and brands and products around the world are struggling with how to make sense of soft data. We’re getting ready to come out with some stuff that we think is a good first step toward looking at your data as a whole, across a lot of different platforms.
I’m wondering – we talked a little bit about this last time – who the real audience for things like this is. Do we think that this is considered shallow data or do we think that this is deep data? Or do you have to start somewhere? Do we have to start shallow and then see where we dive deep? I know that this is an ongoing product discussion for you and I, so we’re getting two things in one today because we’re actually getting to have a product discussion during the show.
Ray: It’s really interesting as a product builder trying to figure that out. I think startups, by their nature – their number one imperative is to move fast, and you’re going to break things along the way. The reason you move fast is that you may have all the best ideas in the world, but you don’t really know where you could be adding the most value for your customers until you have a chance to actually get that feedback from them. Our approach has always been to release things incrementally, even if they’re only a small chunk of the longer-term vision, because it helps us – from both a technology perspective and a code management perspective – to get things out there and keep moving forward. Then we get that feedback from users and incorporate it into what we’re doing. A lot of times, we’ll make small adjustments to our longer-term vision.
It’s interesting, as someone objectively looking at different products out there: a lot of times you’ll see new feature releases that are cool but look a little shallow at first. I think that’s where companies really start to differentiate themselves – between the type of company that is just building a bunch of checklist features and the type that truly has a unique vision around what it’s doing and is willing to pursue it.
On one extreme, you have the approach of – you’re going to release some new feature. It doesn’t do everything that you might want it to do, but at least you can check some box off saying, “Yep. We’ve got it.” The question is: do they go back and further hone that release or do they just move on with the next one?
Every company probably does a little bit of both over time. But I think you can start to see patterns over the long term where you just see companies that are building one checklist feature after another and then moving on with the next one. It’s hard to see a refinement and a unity among all those features and how they really play together into some sort of cohesive vision in the product.
Erin: It’s hard to explain to the customer – or to put the onus on them to understand – that working with a company that releases features quickly has trade-offs. Of course those features may not always be super deep, and there may be some bugs initially while everybody works things out. But a lot of times, companies that are a little smaller and more nimble have the flexibility to bend and move more quickly with the industry – and there is a cost to that. While we may be less expensive than some larger tools, we also release things quickly, and maybe that requires some fixes sometimes. Maybe we release a shallower version and then deepen it later, if it seems like a feature people really want to adopt or really love.
How hard is it to really explain this to customers? It’s difficult to bring up in a sales call or on a demo call: we’re getting ready to release this feature, and we see it tying cohesively into this larger picture; we see it deepening, and this is where we want to go, but of course there may be some bugs along the way. One of the benefits of us being less expensive is that we’re nimble. We have a small team, but that also means we’re not 7,000 developers strong, developing a million things in tandem.
When you chat with people, do you try to broach this subject with them at all? We’re going to release stuff and maybe it’s not the full capability of that feature all at once, but we’re working on it. And yes, it may require some refinement and there may be some bugs, but it’s because we’re moving quickly to get things to you.
Ray: Absolutely. Frankly, some buyers and some users are not going to be okay with that. Certain companies want the IBM of whatever product they’re buying – something that maybe took two years to build but has every single possible variation on that product.
Those types of companies are harder for startups to deal with. Companies that have raised tens of millions of dollars and can spend a year building a feature are better suited to them. And that’s fine – that’s a totally valid thing.
As a startup, it’s one of the things that we’ve never really enjoyed doing because I don’t think anyone at our stage really wants to spend a year on doing anything. It’s really more a matter of building the audience over time and working with those companies that are looking at doing newer things and doing things faster. But there is always this balance of feature maturity, feature richness versus moving quickly and adding more to the platform over time.
Erin: Well, you know me. I’m climbing the walls after a month of trying to do something.
Ray: I am too. It’s just the nature of the business we’re in. It’s a really interesting discussion because there’s no real concrete place where you can land. We can’t really come down on either side and say this way is better in every single case, because it really just depends on the specific feature you’re talking about, the company in question, your go-to-market strategy, and your users, of course.
Erin: Interestingly, my mind immediately says that people who are looking for shallower feature sets are probably the people who are okay with the faster releases and the mess-ups but I actually don’t think that that’s true. Because we have some customers that are super, super deep in the weeds of using the tool and they love it when we’re throwing things their way, even if it is a little buggy at first.
I would actually say the people with shallower feature needs maybe have it a little worse, because they can’t dig into it. You throw out a new metric that’s just a number on the dashboard, with no digging in or context around it – if that piece breaks down or is incorrect, that’s a huge part of what they’re looking at every day, and they’re screwed.
I want to cover a little bit of this too. We’re talking a lot about whether people are cool with trying new stuff out. Who I think probably has the worst time with this is agencies, because they have to deliver stuff to clients. If you’re an internal SEO, content manager, or digital strategist at a brand somewhere, you have some leverage to play with things, look at reporting, and do things your own way. On the agency side, they may have it a little more difficult. I think there’s less wiggle room for trial-and-error feature sets on that side.
Do you see that agencies will go with the safer pick most of the time?
Ray: It depends on the agency a lot of the time. Agencies are in an interesting position where they need to report back to their clients on certain things. It’s in their interest to get as much coverage as possible on the types of things they can automatically report back to their customers, because the more value they can show they’re providing – and the more they can do that at zero incremental cost, by virtue of adding yet another module to the automated report they’re generating – the more it helps them. To the degree that they’re able to just show something, they can point to it and say to their customer, “Look, this is the type of thing we’re managing for you so you don’t have to.”
Incentive-wise, it makes a ton of sense that they would do that; it’s in their interest. We’ve worked with quite a few agencies, and that’s one of the areas where we’ve really spent a lot of time building up certain types of features to support them. If you understand that as a vendor and you’re willing to make that investment, it works out in the long term.
Erin: I would love to know from agency folks how often they actually share dashboard access with their clients versus sending over static PDF-style reporting, or dropping things into PowerPoint or Keynote decks. Dashboards are a really interesting option because, updated live in real time, they give both parties a feeling of control: they can log in and check up on stuff at any time and aren’t beholden to whatever the decks say.
But there’s also some risk to mitigate: whatever tool is in there, somebody may have put something in wrong. We had an example months ago where someone had mistakenly gone in and updated a dashboard’s dates and filter settings, and then the client logged in and asked, “What happened in here? Why does it look like we have no data?” It turned out to be a very simple thing: somebody had changed all the filter settings, which basically showed them nothing.
Things like that happen. I think that that’s another really important consideration when picking a tool. Who’s going to use this thing and how savvy are they? Can you have multiple access points like an admin person and then multiple users? What does that permission setting look like for those folks?
I know that you work a lot with our customers in APAC. In terms of what you see for choosing tools and the conversations we’re having around shallow versus deep stuff, is it different over there than what we see here in the U.S.?
Ray: It is different, and it depends on the specific country within APAC. In Japan, where we work, there is less of a feature discussion now than there was two years ago, because when we were a younger company, the market itself was younger, and companies in Japan actually needed a lot of education around why you would invest in this sort of technology in the first place.
Until we came onto the scene in Japan, most companies, even the agencies, were just using Excel and some sort of free rank tracker. They would basically say, “Okay, here’s my rank data on one hand; here’s my Google Analytics data on the other. Let’s just smash them together in Excel and send them off to the customers and to the CEO.” That’s how you did reporting. So we had to spend a lot of time educating people on the value of a cloud-based system and all the other things you can do with this sort of platform.
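The “smash them together in Excel” workflow Ray describes amounts to a join of rank data and analytics data on the keyword. A minimal Python stand-in for it – with invented keywords and numbers in place of the exported spreadsheets – looks like this:

```python
# Hypothetical exports: keyword -> Google rank, and keyword -> organic visits.
# In the old workflow these would be two spreadsheets pasted side by side.
ranks = {"seo tools": 4, "rank tracker": 11, "content marketing": 7}
analytics = {"seo tools": 320, "content marketing": 540, "found friday": 90}

# Full outer join on keyword, so nothing tracked in either source is lost.
report = []
for keyword in sorted(set(ranks) | set(analytics)):
    report.append({
        "keyword": keyword,
        "rank": ranks.get(keyword),          # None if we don't track rank for it
        "visits": analytics.get(keyword, 0), # 0 if analytics saw no visits
    })

for row in report:
    print(row)
```

The mismatched keys ("rank tracker" with no visits, "found friday" with no rank) are exactly the cells that end up blank or wrong when this join is done by hand in Excel – part of what a unified platform automates away.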
Once we did that, we had the advantage, in the sense that we helped create that market, so any list of features we had was already well ahead of anything else they had seen – it became almost a [24:11 inaudible] point. So we don’t run into the feature-list discussion nearly as much as we did before. When we did, it was primarily based on a fundamental lack of understanding of the value of a platform like this in the first place. So then they started asking, “What else can you do?” Once we got over that hurdle, it became a much easier conversation.
Another place where we’re involved – we have a lot of customers in other parts of APAC, and Singapore is a growing one. There, a lot of times you’ll have regional branches of companies that are active in the U.S., Japan, or other larger markets. They’re much more used to the tools they’ve worked with back in the U.S., for example.
Some companies there have more of a feature-list set of requirements than others. As long as they’re getting the specific regional attention they need, they’re pretty much happy with whatever they see. It’s really interesting to see how each individual country within the region plays out.
Erin: Depending on company size and what their team makeup looks like – I want to finish up our discussion today with a question about the difference between functionality and UI, since that’s where we’ve been today. Suppose somebody finds a dashboard, a look and feel, that they absolutely love: “I love the way this works. This is exactly intuitive to me. This is where I want everything to live.” But it doesn’t have all the functionality and features they want. Versus they find a tool that has every feature they want, but the UI just will not work for them: nothing is intuitive, it’s going to take tons of onboarding time, it’s going to be clunky.
- Which would you pick?
- Is it worthwhile to try to take the tool that has the UI that feels more natural and workable for you and try to work with them on feature sets or is this is too long of a slog?
Ray: It depends a lot on the personality of the user and what they’re incentivized by. If they’re someone who has less autonomy in the organization and is worried about what their boss is going to say, they’re probably just going to pick the one with the crappy UI and the large set of features.
I personally – and I think a lot of newer buyers out there – am looking more for the overall experience. The nice thing about working with vendors that get the experience right first is that if you’re willing to invest in a business relationship with those types of companies, a lot of times they’re going to be much more willing to work with you on building new features. Those buyers need to understand that no company can build everything overnight, but in the long run you’re probably going to get a much better experience.
If a company takes the time to really get the user experience right, I think it says a lot about its founders and the way they run things as a business. A lot of times, it means they’re committed to building a quality product and are looking for customers to help build it further by investing the time and resources to get the list of features right.
If you don’t do that, you’re essentially starving those vendors at a larger scale, and you’re missing out on the opportunity to create a part of the market that probably needs to exist. But if you’re just scared of what your boss is going to say – to put a really fine point on it – a lot of times you’re missing out on the opportunity to really do things in the industry.
Erin: We have this conversation a lot because we’re not only making a tool ourselves, but, as a company trying to do a number of things, we’re using a lot of different tools as well. We regularly talk about a company whose name rhymes with [Schmills Morris? 28:24], and the need for a CRM platform that is not so cumbersome, because there’s just such a small group of us. What are you going to do when it takes you an hour and a half to manage one contact? It’s almost ridiculous.
Features-wise, on several occasions we have talked about sacrificing the total number of features for something that’s just easier to use and easier to get people onboarded with. To bring it all back full circle: if we hire someone at a 15-person company and it takes two or three weeks’ worth of expensive, $3,000-per-person training to onboard them to a tool, we can’t justify that overhead. People need to be able to get up and running and understand the tools of the business immediately – if not the same day, then the same week. Anything beyond that is a little much.
All the deeper nuances of anything will take a little more time and use to understand, but you should be basically up and functional with anything your business uses – unless you’re a medical practitioner, a lawyer, or in some sort of NASA science field – and relatively quickly, because this is what you have to do every day. Otherwise, the first few weeks of someone’s employment are out the window because they’re slogging through trying to get up and running on tools.
Ray, any closing thoughts on this shallow versus deep feature set situation or anything else you just want to talk about?
Ray: It’s a really interesting discussion because you essentially have three parties in any discussion around this. You have the people who are building these products – I look at lots of startups all the time, and there’s really no new industry out there.
CRM is an interesting example. The company you mentioned so poetically has probably 95% of the market share. I haven’t talked to a single person who really enjoys working with that tool, but there’s a reason they have that share. A lot of times it comes down to the huge amount of integration they have with people. How can you argue that that’s an invalid strategy? That’s the product-builder perspective versus the buyer perspective.
And then you have the people who are financing these companies in their early stages – VCs, investors. They’re certainly going to look for these checklist features as well, because they create barriers to entry for other people.
So you have two solid examples of why a company would choose a checklist of features over a unified user experience. That’s a really interesting set of incentives to deal with as you’re building your company. It maybe takes some reevaluation, and it takes a lot of insight and vision on the part of companies and buyers that want to support a higher-quality user experience over anything else.
But it’s really hard to say from a pure strategy perspective what the right approach is. So it’s one of the things that we certainly spend a lot of time talking about and really enjoy talking with our users about too.
Erin: I think you’ve given us next week’s topic for conversation. Even back in the days of Cisco certifications, Microsoft Certified Systems Engineers (MCSE), or [Schmills Morris? 32:18] certifications – once you can say you know how to use a tool, or you’re a licensed administrator of one of these platforms, you also have a very important résumé builder. For people who do invest in these trainings, there’s definitely a big plus in terms of hireability for certain platforms.
Next week I’d love to talk a little more about what the training situation means and how big a deal it is for people. Maybe over the next week I’ll ping some people in the network and ask, “Do you hire people who are already trained on the platforms you use? What does that look like? How big a pro is it when somebody puts on their résumé: I know how to use these things?”
Actually, it was funny – on LinkedIn, we found somebody who claimed to be a GinzaMetrics expert. They had listed us and a few other tools in our space and said, “I have experience using these tools,” trying to market themselves to companies currently using our product. It thrills me; I think it’s great. It’s also interesting because there’s no SEO certification for tools out there right now. Social media tools have the same thing going on as well.
Let’s kick off next week’s conversation with a little bit more about that. Ray, thanks for chatting with me. I think this will cut our product meeting for the week in half. So I will chat with you next week. Thanks, everyone.