Product Engineering Lab

Build the right f*ckin' thing

You have a product idea your team can't stop arguing about. I'll build the cheapest possible test, get real data, and tell you if it's worth building for real. If it is, I'll build that too.

About Peppe Silletti

I've been inside enough product teams to see the same pattern over and over. Smart people, good intentions, and a pile of features nobody validated before building.

The teams that win don't ship the fastest. They learn the fastest. The bottleneck is never code. It's the gap between "we think users want this" and "we actually checked."

I got tired of writing about the problem, so I started fixing it directly: product thinking combined with engineering execution. No more essays. Just experiments and answers.

Be honest

You're guessing. And it's expensive.

Your team ships features based on gut feelings and whoever argues loudest in the roadmap meeting. Sometimes they're right. Usually they're not, and you figure it out three months and €80k too late.

You overbuild

Three sprints deep into a feature nobody asked for. PM wrote the spec. Engineering built it. Users don't give a shit.

You underbuild

Endless surveys and interviews but zero working software generating real data. You've got opinions. Not evidence.

You build the right thing wrong

The insight was there, but the people who understand users and the people who write code never talked to each other.

You don't build at all

Another "discovery phase" that produced a 40-slide deck and zero learning. Your competitor shipped something ugly, learned in a week, and moved on.

One question. One experiment. No bullshit.

I don't do roadmaps. I do experiments. You bring me a product question you can't answer with opinions, and I turn it into evidence. Fast, lean, and built in your actual codebase.

  1. Frame the hypothesis

    You tell me what's keeping you up at night. I turn your vague anxiety into something we can actually test: "We believe [this change] will cause [this outcome] for [these users], and we'll know because [this metric moves]." No more arguing in circles.

    First 1-2 days → Async deep-dive + hypothesis doc
  2. Design the smallest experiment

    What's the cheapest, fastest way to get a real answer? A prototype. A feature flag. A fake door. I don't care what it looks like — I care that it generates signal. Maximum learning, minimum spend.

    Next → Experiment brief with success criteria + timeline estimate
  3. Build and instrument

    I build actual working code in your stack. Not a mockup. Not a slide deck. Instrumented to capture the behavioral data that tells you if your hypothesis is right or wrong. This is where having someone who thinks product and writes code becomes an unfair advantage. (There's a minimal sketch of what this can look like right after this list.)

    Build time varies → from a few days to a couple of weeks
  4. Read the results, deliver the insight

    I tell you what happened. Not in a 40-page report nobody reads. A short, honest brief: here's what the data says, here's what I'd do, here's what we still don't know. Then you decide.

    Once results are in → Experiment report + recommendation
  5. Build it together (optional)

    It worked? Good. I'm already in your codebase, I already understand the problem, the data, the users. I'll either build the production feature myself or pair with your product team to ship it.

    Scope and timeline based on experiment findings
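
To make steps 1 through 3 concrete, here's a minimal sketch of a framed hypothesis and a fake-door experiment. It's TypeScript for illustration only: the feature ("bulk export"), the event names, and the split logic are all invented for the example, and a real experiment would live in your stack and feed your existing analytics pipeline.

```typescript
// Illustrative sketch only. The feature ("bulk export"), the events, and
// the split logic are invented; the real thing lives in your codebase.

// Step 1 output: the hypothesis, pinned down so it can actually fail.
interface Hypothesis {
  change: string;  // [this change]
  outcome: string; // [this outcome]
  users: string;   // [these users]
  metric: string;  // [this metric moves]
}

const hypothesis: Hypothesis = {
  change: "exposing a bulk-export entry point",
  outcome: "a meaningful share of admins will try to use it",
  users: "workspace admins",
  metric: "unique clicks on the not-yet-built export button",
};

// Steps 2-3: a fake door renders the entry point for a feature that doesn't
// exist yet and measures who tries to walk through it.
type Variant = "control" | "fake_door";

interface ExperimentEvent {
  name: "export_button_seen" | "export_button_clicked";
  userId: string;
  variant: Variant;
  timestamp: number;
}

// Stand-in for whatever analytics sink you already run
// (PostHog, Segment, a plain events table -- it doesn't matter).
function track(event: ExperimentEvent): void {
  console.log(JSON.stringify(event));
}

// Deterministic 50/50 split: the same user always gets the same variant.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0 ? "control" : "fake_door";
}

// Render-time decision: only the fake_door variant ever sees the button,
// and every impression is recorded so the click rate has a denominator.
function shouldShowExportButton(userId: string): boolean {
  if (assignVariant(userId) !== "fake_door") return false;
  track({
    name: "export_button_seen",
    userId,
    variant: "fake_door",
    timestamp: Date.now(),
  });
  return true;
}

// Click handler for users who do see the door.
function onExportClicked(userId: string): void {
  track({
    name: "export_button_clicked",
    userId,
    variant: "fake_door",
    timestamp: Date.now(),
  });
  // Be honest with the user: the feature doesn't exist yet.
  console.log("[toast] Bulk export is coming soon. Thanks for the signal!");
}

// Smoke test: frame it, show it, count the knock.
console.log(`Testing: ${hypothesis.change} -> ${hypothesis.metric}`);
if (shouldShowExportButton("user_42")) {
  onExportClicked("user_42");
}
```

The whole trick is in the ratio: clicks over impressions tells you whether anyone wants the door opened before you build what's behind it.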

What this actually looks like

HYPOTHETICAL EXAMPLE
The problem: Only 34% of new users complete onboarding and nobody can agree on why. PMs say the flow is too long. Engineers blame performance. The designer wants to burn it all down and start over. Six weeks of debate. Zero data. Classic.

We framed

"Users who reach step 3 of onboarding drop off at 3x the rate of other steps. We believe step 3 asks for information users don't have yet, causing friction."

I built

Instrumented the existing flow with step-level tracking, added a variant that let users skip step 3 and complete it later, and measured completion rate, time-to-activation, and 30-day retention for both paths.
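
If you're curious what that instrumentation looks like in practice, here's a hypothetical sketch. The step numbers, event shape, and skip rule are invented to match the story above; the real version would hang off whatever your onboarding flow and analytics already use.

```typescript
// Hypothetical step-level onboarding instrumentation, matching the story
// above. Names and shapes are invented for illustration.

type Variant = "control" | "skippable_step_3";
type StepAction = "entered" | "completed" | "skipped" | "abandoned";

interface OnboardingStepEvent {
  userId: string;
  step: 1 | 2 | 3 | 4;
  action: StepAction;
  variant: Variant;
  at: string; // ISO timestamp, so events can be joined with retention data
}

// Stand-in for the real analytics sink.
function logStep(event: OnboardingStepEvent): void {
  console.log(JSON.stringify(event));
}

// The variant's only change: step 3 (billing info) becomes skippable.
function canSkip(step: OnboardingStepEvent["step"], variant: Variant): boolean {
  return step === 3 && variant === "skippable_step_3";
}

// Example trace: a variant user hits step 3, skips it, and keeps going.
const userId = "user_123";
const variant: Variant = "skippable_step_3";
logStep({ userId, step: 3, action: "entered", variant, at: new Date().toISOString() });
if (canSkip(3, variant)) {
  logStep({ userId, step: 3, action: "skipped", variant, at: new Date().toISOString() });
}
```

With events like these in place, per-step drop-off, completion rate, and the 30-day retention split stop being opinions and become queries.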

We learned

Users who skipped step 3 had 2x higher 30-day retention. It wasn't the flow length. It wasn't performance. It was asking for billing info before users saw any value. The team fixed one step instead of rebuilding everything. Weeks of arguing, resolved with one experiment.

Book a free call

Want results like this? Let's talk about your hypothesis.

What it costs

Pick your format. Same rigour either way.

Single Experiment

One hypothesis. One cycle. One answer. Good for when you have a specific burning question and want to stop guessing.

Price: €3,500 per cycle

  • One full experiment cycle
  • Hypothesis framing session
  • Working code + instrumentation
  • Experiment report + recommendation
  • One round of follow-up questions

+ Build it together

The experiment proved something worth building? I'm already inside your codebase. I'll build the feature or pair with your engineers to ship it. No re-onboarding. No context lost. Scoped and priced after the experiment so nobody's guessing.

Price: Custom, scoped per feature

  • Solo build or pairing with your engineers
  • Built on validated experiment findings
  • Knowledge transfer baked in
  • Fixed scope, fixed price

Not sure which? Book a 20-minute call. Worst case, you leave with a sharper hypothesis to test on your own. No pitch deck. No follow-up sequence.

Everything you're probably wondering

What exactly is an "experiment cycle"?

We pick one hypothesis. I build the smallest possible test to validate it, instrument it to capture real user behavior, and hand you a clear recommendation: ship it, iterate, or kill it. One cycle, one decision. That's it.

What tech stack do you work with?

Yours. Your repo, your CI, your infrastructure. I don't build throwaway prototypes in a sandbox. The experiment code is production-grade from day one, so when the results say "ship it," you actually can.

How long does a cycle take?

Depends on the experiment. Some wrap in days, others need more time to collect real data. I'll give you a straight timeline during framing so nobody's surprised.

Do I need a dedicated team to work with you?

Nope. I work async via Slack. Whether you've got a full product team or just two engineers and a dream, I'll fit in. No mandatory standups. No ceremonies.

What if the experiment fails?

Then it did exactly what it was supposed to. A "failed" experiment means you just saved months of effort and budget building something nobody wanted. You get a clear report on what happened and what to do next. That's not failure — that's the whole point.

Can you just build features without the experiment part?

No. The experiment always comes first. If you already know exactly what to build and just need someone to code it, hire a freelancer. I'm here for when you're not sure yet.

What happens after we finish a cycle?

Up to you. Ship the validated feature with the "Build it together" add-on. Run another experiment on a different question. Or hand the findings and code to your team and take it from there. No lock-in.

How is this different from hiring a freelancer?

A freelancer builds what you ask for. I start by asking "should we build this at all?" You get product thinking and engineering execution in the same person. I question the brief before writing the first line of code.

Stop arguing about it.
Go test it.

Your team has opinions. Everybody does. I'll help you turn them into evidence before you waste another quarter.

Book a free call

No pitch deck. No sales funnel. Just a conversation between two people who care about building the right thing.