Ecom with Jon - August 18, 2024
What I learned this week - Software ROI
Here’s what I learned this week
I feel it's disingenuous to promise results.
Yet everywhere I look, this has become the name of the game.
Outlandish promises, not backed by any data.
We’ve all just become a bunch of used car salesmen willing to say anything we can to get a deal.
This bothers me.
FTC is cracking down
This is worth a read; it gives the FTC the ability to go after anyone trying to manipulate public opinion through fake or dishonest means:
This will probably impact your favorite social media influencers as well.
Also look for changes coming in the near future to the way testimonials and case studies are presented, and for more scrutiny on how marketing is done.
The Evolution of Sales
Perceived value for software is at all-time lows and quickly racing to the bottom.
It’s a weird place to be in right now as was highlighted by a recent exchange with a potential client.
Potential Client: "You're too expensive at $1000 a month"
Me: "How much do you spend on Facebook?"
Potential Client: "$20-30k per day"
Me: "What's your AOV?"
Potential Client: "$126"
Me: "So less than 10 extra sales per month and you're ROI positive or put another way, 5% or less of one day of ad spend and your get a full 30 day trial for free? I'm not following the logic on this one."
The problem I have is that I'm a rational and logical individual who's able to look at things from the perspective of:
Does it work, how long does it take to tell, and what are the downsides?
Will it make us money?
How hard is it to unwind?
What does implementation look like?
In the above example it’s related to our popup software at Formtoro.
Implementation time on the client side is 10-15 minutes max, and the agency partner can handle the rest. I drafted the first version of their forms too, so the total time needed for that was less than 30 minutes.
Unwinding it takes less than 2 mins.
Collectively, it can be completely set up, tested, and unwound in less than an hour.
Again this is me being logical and rational.
If I can set something up in less than an hour, test it for free for 30 days, like the features presented, and follow the logical argument for how it improves what we're currently doing, I'm willing to test.
In fact, I regularly test things in this fashion.
But I can’t help but feel I’m missing something.
This is from my LinkedIn post the other day, and there were some good comments on it; some stuck out:
Byron Hill had an interesting comment:
“Not sure if this would be helpful, but my agency used to be in a similar situation.
Opportunity cost is a real thing and so is loss aversion.
What we found is that using actual revenue numbers from their backend and giving a modest guarantee (3x ROI) worked quite well - it was believable and we had the numbers to back it up.
We actually found minimal difference in close rate when adding in the free trial.
We found the most important part here is educating through the numbers.
Showing them that a 3X ROI which we get 99% of the time would make them more profitable because they don't need to increase ad spend, fixed costs, etc. to get it showed the value extremely well.”
So what Byron doesn't know is that's actually how the conversation started with this client: we walked through the numbers and the expected gains, complete with real client data.
But data is tough for people to see and understand. It could also have been just poor timing: the client insisted on a call at 11pm their time, after traveling all day, with room service being ordered in their hotel room in the background.
Now this is interesting from an agency perspective and leads to a question of positioning, as software we could do something similar.
Charge straight away with a guarantee of 3x ROI on our costs or your money back when implemented correctly.
For any larger brand we would hit that guarantee pretty quickly, but there's an issue with how to properly test for it, plus inconsistencies with existing software.
This came up in a conversation with Josh Tay the other day: they were running an exact duplicate of a form using three different vendors' popups, getting similar results from the external vendors but different results from the internal platform vendor, and they couldn't figure out why.
We're looking into it, but early feedback and experience tell us that the external vendors are not targeting as restrictively, which is propping up numbers that aren't actually real.
Most technically minded people would conduct an audit and be able to spot this.
But you would have to know what you’re looking for.
Which brings us to the next question…
What counts as ROI?
This is a question I've started to ask. Being in the popup space, it really is about incrementality, but you can't just claim credit for everyone who signs up through your popup forms; saying you drove that revenue would be madness.
It’s simply not true.
We created a full 90 day evaluation plan for people that want to try Formtoro.
It encompasses a starting point and a means of systematically testing our effectiveness, but it doesn't include a way to evaluate other vendors. If we go like-for-like on forms, some vendors will look similar to us, but others that aren't as strict will show inflated subscriber numbers because they lack the appropriate and necessary targeting.
We know this because we quantified that about 20-25% of subscribers who come in through external platforms are actually people already subscribed, not counting the number of people who subscribe using different email addresses.
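If you want to quantify this for your own list, here's a minimal sketch, assuming you can export the popup vendor's claimed signups and your ESP's pre-existing list as CSVs (the file and column names below are hypothetical):

```python
import csv

def load_emails(path, column="email"):
    """Load a CSV export and return a set of normalized email addresses."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Hypothetical exports: the vendor's claimed new signups for the test
# window vs. the ESP list as it existed before that window started.
claimed = load_emails("popup_vendor_signups.csv")
existing = load_emails("esp_list_before_test.csv")

already_subscribed = claimed & existing
overlap_rate = len(already_subscribed) / len(claimed)

print(f"{len(claimed)} claimed signups, {len(already_subscribed)} "
      f"already on the list ({overlap_rate:.1%})")
```

Note this still undercounts: it won't catch the people who resubscribe with a different email address.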
In short, the whole idea of tracking subscriber count as a KPI is usually an utter and total waste of time.
But the idea keeps coming back that opt-in rate and number of emails are the gold standard in ecommerce.
Again, this is my logical and rational side coming out: if the numbers don't make sense, and there's too much dispersion between a popup maker's claims and what that same popup does natively in your ESP, then something isn't right with the targeting or exclusions.
Anecdotally, the easiest way to tell is to write down every time you see a popup after clicking through from one of the brand's emails.
Someone subscribed to marketing emails shouldn't be seeing a popup, because their ID is passed along by the email's click tracking.
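The mechanics of that check can be sketched in a few lines. The parameter names here are hypothetical, since each ESP appends its own identifiers to tracked links, so inspect your own click-through URLs first:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical identity parameters an ESP might append to tracked
# links; the real names vary by platform.
IDENTITY_PARAMS = {"subscriber_id", "utm_email_id"}

def should_suppress_popup(landing_url: str) -> bool:
    """A visitor arriving from an email click carries an identifier in
    the URL, so a signup popup should never be shown to them."""
    params = parse_qs(urlparse(landing_url).query)
    return any(p in params for p in IDENTITY_PARAMS)

print(should_suppress_popup(
    "https://example.com/products/tee?subscriber_id=abc123"))  # True
print(should_suppress_popup("https://example.com/products/tee"))  # False
```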
So this puts me in an awkward place as an honest software vendor: we build for the customer journey in a way that takes all of this into account, whereas a lot of competitors in the market do not.
“It’s a feature not a bug.” comes to mind a lot while reading through claims that don’t make sense.
How can a brand go from a 4% opt-in rate to a 10% opt-in rate just using another vendor with the same exact setup?
You can’t.
It’s not possible.
The irony is I hear a lot of marketers just say, “I don't know how it works, but it works.”
This is the same as someone watching a magic trick and saying, “I don't know how it works, but I saw it.”
There’s always a reason why something happens, everything has a logical explanation.
Ok, so most brands are tracking the wrong KPIs and seeing “better results” using 3rd party popup software vs. the internal popups in their ESP.
It's likely that instead they are showing more popups to people who have already signed up, which boosts their numbers, and there's rarely an audit comparing the actual counts going into the welcome series vs. what the popup software is claiming.
Trust us, we know: we tracked this internally for a while with pretty accurate results.
So how do you prove ROI when everyone is attributing in ways that conflict?
What does true return on investment look like?
Proper Incrementality Testing
The 90-Day plan referenced above would be the cleanest way to test the impact of using our software and strategy.
The problem is everyone we talk to wants to jump to Stage 3 rather than work through Stage 1 and Stage 2.
If you're looking for quick results, we know that jumping to the last step will yield them; we've been testing these things on accounts that are larger than most.
Which brings us to the next problem…
The other issue is that it takes a lot of data to make an actual analysis of whether or not something is working.
We know because we've tested this on websites with a lot of traffic and 50k signups a week.
That's a lot of signups, more in a week than most stores' entire lists.
But you need numbers like these to normalize the data and get rid of outlier behaviors.
That’s the rub. The foundational knowledge and strategy works. It’s battle tested.
The reality is testing requires a lot of math. More math than most brands feel comfortable doing, as things vary by ad spend, traffic, quality of traffic, etc.
It’s super hard to do this stuff without a lot of traffic.
So we hypothesize, then are lucky enough to be able to run tests with our large clients to see the results.
When we're reporting on those results, though, we have to normalize the numbers, relying on percentage data to remove bias around traffic volumes as best we can.
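“Normalizing” here just means converting raw counts into rates so variants with unequal traffic can be compared on equal footing. A minimal sketch, with placeholder counts rather than the actual test data:

```python
def normalize(variant):
    """Convert raw counts into rates so variants with different
    traffic levels can be compared fairly."""
    return {
        "signup_rate": variant["signups"] / variant["views"],
        "signup_to_order_rate": variant["orders"] / variant["signups"],
        "aov": variant["revenue"] / variant["orders"],
        "revenue_per_view": variant["revenue"] / variant["views"],
    }

# Placeholder counts for a control form and an auto-apply form.
control = {"views": 210_000, "signups": 11_300, "orders": 2_590, "revenue": 144_700}
auto_apply = {"views": 205_000, "signups": 11_100, "orders": 2_940, "revenue": 171_600}

for name, counts in (("control", control), ("auto-apply", auto_apply)):
    rates = normalize(counts)
    print(name, {k: round(v, 4) for k, v in rates.items()})
```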
In short, it works the way it should, but you shouldn't just take my word for it. I'll need to add this to our website next week; I just need to figure out the best format for it.
The results of Auto-Apply Coupon Codes in terms of Incrementality
We’ve been running a test on the website above that does 50k signups per week.
They already collect data, a lot of it.
All things being equal, two things changed between the forms:
Coupon codes were auto-applied to the cart (see the sketch after this list)
Forms were rerouted to a step telling people their coupon code had been added to their cart if they dropped off during any step after providing their email
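As an aside, one common way auto-apply is implemented on Shopify stores, an assumption here rather than a description of how Formtoro does it, is to send the new subscriber through the discount share URL, which applies the code to their cart automatically:

```python
# Sketch of the Shopify discount share URL pattern; the store domain,
# code, and return path below are made up for illustration.
def discount_redirect(store_domain: str, code: str, return_path: str = "/") -> str:
    """Build a URL that applies a discount code and then redirects."""
    return f"https://{store_domain}/discount/{code}?redirect={return_path}"

print(discount_redirect("example-store.com", "WELCOME10", "/collections/all"))
# https://example-store.com/discount/WELCOME10?redirect=/collections/all
```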
Sign-up rates themselves showed very little statistical difference, which is what we'd expect to see since nothing was changed on those steps.
Our hypothesis was that AOV would go up because people would see the discount in their cart as they shopped, and that orders would go up, coupled with an increase in the subscriber-to-conversion rate.
This proved out across the board.
Same amount of traffic, higher AOV, more revenue, more conversions - the trifecta of how technology can increase performance.
I'm not a tease; here's the sheet with both Raw Numbers and Naturalized Numbers:
Auto Apply Test Results Sheet
The truth is I need to do more of this, but you need really large data sets to do this properly.
I recommend clicking the link above and looking at the Naturalized Numbers tab. If you're on mobile, you'll want to wait until you're on a computer.
Here’s a summary for those on mobile.
I’m going to break these down by form because this stuff does matter.
Homepage Forms - Mobile Devices
Revenue gain of $26,946, or 18.62%, with Auto-Apply
AOV increase of $2.50 per order, or 4.49%, with Auto-Apply
Increase in orders of 351, or 13.53%, with Auto-Apply
Increase in subscribers who converted of 2.93 percentage points, a 13.04% gain, with Auto-Apply
Homepage Forms - Desktop Devices
Revenue gain of $8,133, or 11.50%, with Auto-Apply
AOV increase of $1.40 per order, or 2.51%, with Auto-Apply
Increase in orders of 111, or 8.77%, with Auto-Apply
Increase in subscribers who converted of 2.30 percentage points, a 7.71% gain, with Auto-Apply
Product Page Forms - Mobile Devices
Revenue gain of $17,741, or 16.40%, with Auto-Apply
AOV increase of $2.50 per order, or 4.37%, with Auto-Apply
Increase in orders of 218, or 11.52%, with Auto-Apply
Increase in subscribers who converted of 1.35 percentage points, a 5.59% gain, with Auto-Apply
Product Page Forms - Desktop Devices
Revenue gain of $7,751, or 15.36%, with Auto-Apply
AOV increase of $3.00 per order, or 5.49%, with Auto-Apply
Increase in orders of 86, or 9.35%, with Auto-Apply
Increase in subscribers who converted of 1.53 percentage points, a 4.79% gain, with Auto-Apply
Total Gains for a single week modeled:
$60,570.76 additional revenue
767 additional orders
Monthly estimates modeled:
$259,588.98 additional revenue
3287 additional orders
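The monthly estimates are just the weekly totals scaled to a 30-day month:

\[
\$60{,}570.76 \times \tfrac{30}{7} \approx \$259{,}589 \qquad 767 \times \tfrac{30}{7} \approx 3{,}287
\]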
Total time to make the changes to their forms? 20 mins.
The above results are before any data is used to improve the quality of audience or the customer journey.
That means with literally no changes to the customer journey there are massive gains.
If we were doing a pure ROI calculation on just this change, we'd be at around 52x.
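For clarity, that multiple is just incremental revenue divided by software cost. Working backwards from the modeled numbers (an inference; the actual fee isn't stated here), 52x against roughly $259,589 per month implies a cost basis on the order of $5,000 per month:

\[
\text{ROI multiple} = \frac{\text{incremental revenue}}{\text{software cost}}, \qquad \frac{\$259{,}589}{52} \approx \$4{,}992
\]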
So conversations like the one above sometimes confuse me. I could guarantee results; hell, we could even do it for people soup to nuts and charge probably 10x the amount we do. We're that underpriced vs. the value that we provide.
I could even work out a pure performance play for those that didn’t want to pay upfront, but they would quickly regret that deal.
So this is kind of where I find myself a little lost on pricing. Perhaps the missing piece is explaining the impact, then letting people put in their own numbers around subscription rates, etc., for their own forms.
A lot of people don’t really have this level of data.
Big thank you to Klaviyo for finally adding it; maybe brands will have the information they need to plug into a calculator. Either way, I'm going to be reminded by my friend Soma of how much data you really need for statistical significance. HINT: It's likely more than you have the ability to collect.
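If you want to sanity-check that for yourself, here's a rough sketch of the standard two-proportion sample-size formula; the baseline and lift below are hypothetical, so plug in your own rates:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a move from
    conversion rate p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detecting a lift from a 4% to a 4.4% opt-in rate
# takes roughly 40,000 visitors per variant.
print(sample_size_per_variant(0.04, 0.044))
```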
Summary
You need a lot more data than you think to know whether something is or isn't working, but you can work with software that can run tests on much larger data sets and tell you quantifiably what is better for the customer and will result in more sales.
Most brands might not see the same results, because a few variables can grossly impact test outcomes when the samples aren't huge.
This data set from Soma is fewer than 3,000 opt-ins. The data above was modeled from millions of form views and nearly 150,000 subscribers.
So this is where I kind of lose my mind a bit, because everyone thinks they are much bigger than they are, and they think they know what works and what doesn't. The fact is, most will never have access to the traffic or numbers needed to actually know if a test is working, and instead they hyper-focus on all the wrong things.
Sometimes you need to just trust the process.
If you've made it to the end of this newsletter and learned something, I'm glad. If there's a specific topic you want me to cover, let me know.
If you literally want free money for your brand or clients: even though you might not have the volume we test these things on, the data lets me pretty much guarantee you'll see a decent ROI with minimal lift, without even the data collection part.
It’s literally free money.
The Takeaway
This was a fun one. I like diving in, looking at data, and knowing the positive impact we have on brands by building technology that actually focuses on conversion rather than just opt-in rates.
If you found this valuable, forward it to a friend and tell them to subscribe.
-Jon
Catch up on past posts: https://ecomwithjon.beehiiv.com/
You can learn from me: jonivanco.com