With the A/B Testing feature in the Tweakwise App, you make informed decisions based on data instead of assumptions. By testing different variations of your merchandising, filters, and sorting against each other, you discover what truly works for your visitors.
In this article, we discuss why you should A/B test, concrete use cases, best practices, and how to get started with A/B testing.
Why A/B test?
Many optimizations in webshops are still made based on gut feeling. Think of statements like: "This layout looks more modern" or "I think more filters are always better." With A/B testing, you put these assumptions to the test by measuring actual visitor behavior: you let the data decide. This leads to:
Higher conversion
Better customer experience
Faster optimization cycles
Less risk when making changes
Concrete use cases
1. Grid layout and visual components
What do you test?
Size of product images
Placement of banners or promotions
Placement of Guided Selling
Why this works: The visual presentation has a direct impact on scannability and click behavior.
Example 1:
Variant A: Standard product sorting
Variant B: Standard product sorting including banners
Example 2:
Variant A: Standard product sorting
Variant B: Standard product sorting including a Guided Selling reference/banner
Metrics: CTR, add-to-cart rate, conversion
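The metrics named here (and in the use cases below) are simple ratios. As a point of reference, here is a minimal sketch of the usual definitions; the function names and inputs are illustrative assumptions, since the Tweakwise App reports these metrics for you:

```python
# Illustrative definitions of the metrics used throughout this article.
# Names and inputs are hypothetical; the Tweakwise App computes these for you.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: share of impressions that resulted in a click."""
    return clicks / impressions

def add_to_cart_rate(add_to_carts: int, product_views: int) -> float:
    """Share of product views that resulted in an add-to-cart."""
    return add_to_carts / product_views

def conversion_rate(orders: int, sessions: int) -> float:
    """Share of sessions that resulted in an order."""
    return orders / sessions
```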
2. Campaigns and merchandising rules
What do you test?
Manually pushed products vs. algorithmic ranking
Different campaign settings
Promotion of high-margin products vs. popular products
Why this works: You discover whether commercial choices actually contribute to revenue.
Example 1:
Variant A: Focus on bestsellers in your sorting rules
Variant B: Focus on high-margin products in your sorting rules
Example 2:
Variant A: Manual sorting (pinned products or product orders supplied by a CMS, PIM, etc.)
Variant B: Sorting according to the sorting algorithm
Metrics: Revenue, conversion, average order value (AOV)
3. Filter order and visibility
What do you test?
Order of filters (price, brand, size, etc.)
Expanding or collapsing filters
Number of visible filters
Why this works: Filters determine how quickly visitors find what they are looking for.
Example 1:
Variant A: Price filter at the top
Variant B: Category filter at the top
Example 2:
Variant A: First 5 filters expanded, the rest collapsed
Variant B: All filters expanded
Example 3:
Variant A: Filter sorting based on AI Smart Filters
Variant B: Manual filter sorting
Metrics: Conversion (and filter usage in the Insights module)
4. Personalization vs. generic content (Merchandising Builder)
What do you test?
Presence of personalized components
Type of personalization (e.g., recently viewed vs. recommended products)
Position of personalized content on the page
Why this works: Personalization can increase relevance for the user, but it doesn't automatically have a positive effect. By testing, you discover whether personalized content actually performs better than generic content, and in which position.
Example 1: Personalization vs. generic content
Variant A: Builder with personalized components (e.g., “Just for you” or “Last viewed”)
Variant B: Builder without personalized components
Metrics: Conversion, CTR, revenue
Example 2: Position of personalization
Variant A: Personalization at the top of the page (immediately visible)
Variant B: Personalization lower on the page (after generic content)
Metrics: Conversion rate (CR) on the personalized components, overall conversion
Best practices for successful A/B tests
To run a reliable test, we recommend the following rules of thumb:
Test one variable at a time: If you change multiple things at once, you won't know what caused the effect.
✔ Good: adjusting only the sorting.
✖ Not as good: adjusting sorting + layout + filters simultaneously.
Ensure sufficient data: A test needs time and volume to be reliable (the sketch below gives a rough feel for the numbers).
Tip: Run tests for at least 3 weeks.
Take peak and off-peak moments into account (weekends, campaigns).
Select one or more high-traffic categories.
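How much volume is "sufficient" depends on your baseline conversion rate and on how small an uplift you want to be able to detect. Below is a minimal sketch of the standard two-proportion sample-size estimate; this is a generic statistical rule of thumb, not a Tweakwise feature, and the baseline and uplift values are example assumptions:

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(baseline_cr: float, relative_uplift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough number of visitors needed per variant to detect a relative
    uplift in conversion rate (two-proportion z-test, normal approximation)."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Example: 2% baseline conversion, aiming to detect a 10% relative uplift.
print(visitors_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

Note how quickly the required volume grows for small uplifts: this is why low-traffic pages rarely produce a conclusive result within 3 weeks.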
Determine your KPIs in advance: Decide what “success” looks like before the test starts:
Conversion, CTR, revenue, or add-to-cart rate.
This prevents you from adjusting your goals afterward based on chance results.
Start small and learn from your first tests: Start simple to get a feel for A/B testing and what it can mean for your platform. You don't have to start with large or complex changes immediately.
Dare to learn from “no difference”: Not every test has a clear winner, and that is also valuable information. It means that:
There is no risk involved in the change.
You can shift your focus to other optimizations.
How to get started?
Choose one use case: Start simple, for example, with a sorting rule.
Formulate a hypothesis: For example: "If we sort by popularity, conversion will increase because users see relevant products faster."
Set up the test: Configure the variants in the Tweakwise App.
Analyze: Review the results after 3 weeks and permanently implement the winning variant.
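The app reports the results for you, but if you want to sanity-check whether a difference in conversion is real or just noise, the classic tool is a two-proportion z-test. A minimal sketch, with made-up visitor and order counts:

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(visitors_a: int, orders_a: int,
               visitors_b: int, orders_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between
    two variants (two-proportion z-test, pooled standard error)."""
    cr_a = orders_a / visitors_a
    cr_b = orders_b / visitors_b
    pooled = (orders_a + orders_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (cr_b - cr_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts after 3 weeks: is variant B's lift real or noise?
p = ab_p_value(visitors_a=40_000, orders_a=800,   # 2.0% conversion
               visitors_b=40_000, orders_b=880)   # 2.2% conversion
print(f"p-value: {p:.3f}")  # ~0.049; below 0.05 is conventionally significant
```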
Conclusion
A/B testing helps you continuously improve based on actual user data. By starting small and testing systematically, you build a better-performing platform step by step.
Remember: Not every test produces a winner, but every test produces a new insight! 💡