Physical First, Digital Second: Why Unplugged Activities Make Screen Time Work Better
How hands-on play builds the cognitive scaffolding that makes coding and screen time actually stick for toddlers
My son loves cherry tomatoes. He picks which one I cut next, and I slice it up for him. It’s a snack ritual.
The other day, he stopped eating them and started lining them up instead. Yellow ones here. Orange ones there. Red. Then the weird reddish-brown ones that look like they can’t decide what they are. He made a rainbow across the counter, completely unprompted.
A few days later, we sat down and built a sorting game on the computer. Emoji animals and emoji vehicles appear on screen, and you drag them to either a garage or a grassy field. He got it instantly — he’d already done the hard cognitive work of sorting with tomatoes, and many other things before that. The screen version was just a new surface for something he already understood.
Physical first, digital second; that way, kids have something real to relate the digital version to.
Why Physical Matters
Young kids learn through their bodies first.
Piaget’s stages of cognitive development place children under seven in the preoperational stage, where thinking is tied to concrete, tangible experience. Abstract reasoning — the kind screens demand — doesn’t fully develop until much later. Children at this age think by doing. They need to touch, move, sort, stack, and break things to build mental models.
Montessori figured this out over a century ago: concrete before abstract. Let children manipulate real objects until the concept lives in their hands, then introduce the symbolic version. Research in embodied cognition backs this up — physical manipulation creates sensorimotor traces that anchor learning in ways that flat visual input alone doesn’t.
When my son sorted tomatoes by color, he was making decisions. This one’s orange, not red. This one’s in between — where does it go? That’s where the cognitive work happens. The sorting game on the computer just gave him a new context to exercise the same skill.
Screens are visual and auditory only. That’s fine for adults who have decades of physical experience to draw on. But for a three-year-old still building those mental models, starting on a screen is like reading the manual before seeing the tool. You’ve got a thin concept of it without real texture or heft.
I’m not anti-screen. My son learned to read with an iPad app, and we build browser games together for fun. But I’ve noticed a clear pattern: when we do a physical version of a concept first, the digital version lands faster, sticks better, and is way more fun for both of us.
The Pattern
1. Physical exploration. Hands-on, no screens. The concept shows up through play.
2. Connection. Talk about what just happened. “You sorted the tomatoes by color — what other ways could we sort them?”
3. Digital creation. Build something on the computer that uses the same concept. “Want to make a sorting game?”
4. Play. Actually play with what you built. The kid sees their physical understanding reflected on screen.
The bridge between steps 2 and 3 is where the magic happens. When my son sits down at the computer after sorting tomatoes, he’s not encountering “sorting” for the first time. He’s recognizing it. “Oh, this is like the tomatoes!” The concept transfers.
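If you’re curious what the “brain” of a sorting game looks like, it’s almost nothing. Here is a minimal JavaScript sketch of the matching check; the item list and function name are invented for illustration, and a real version would layer drag-and-drop on top:

```javascript
// Sketch of a sorting game's core logic (hypothetical names).
// Each item belongs to a category; dropping it on a zone is correct
// when the zone's category matches the item's.
const items = [
  { emoji: "🐶", category: "animal" },
  { emoji: "🚗", category: "vehicle" },
  { emoji: "🐱", category: "animal" },
  { emoji: "🚜", category: "vehicle" },
];

// Same decision as the tomato pile: does this one belong here?
function isCorrectDrop(item, zoneCategory) {
  return item.category === zoneCategory;
}
```

The whole game is that one comparison plus some feedback. The sorting concept itself lives in the kid’s head, which is the point.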
Real Examples
Here’s how this plays out with different computational thinking concepts:
Sequencing
Physical: Steps for washing the car. First you rinse it. Then soap. Then scrub. Then rinse again. Order matters — soap without water does nothing.
Digital: We made a car wash game where tools appear on screen (water, soap, sponge) and you click them in the right order to wash the car. He already knew the sequence from doing it in real life. The game just let him practice it on repeat without the running water.
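The logic behind a sequencing game like this is a few lines. A hedged JavaScript sketch, where the step names and the `makeGame` helper are invented for illustration:

```javascript
// Sketch of a car-wash sequencing game (hypothetical names).
// The game only accepts the next expected step, so order matters:
// soap before water gets rejected, just like in the driveway.
const sequence = ["rinse", "soap", "scrub", "rinse"];

function makeGame(steps) {
  let position = 0;
  return {
    // Returns true and advances if `tool` is the next correct step.
    tryStep(tool) {
      if (tool === steps[position]) {
        position += 1;
        return true;
      }
      return false; // wrong order: try again
    },
    get done() {
      return position === steps.length;
    },
  };
}
```

Everything else in the game (pictures, sounds, the car getting shinier) is decoration around that one ordered check.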
Patterns
Physical: I point these out to him everywhere. Stripes on a crosswalk. Alternating fence posts. The rhythm of windshield wipers. Once you start noticing patterns, a three-year-old will not let you stop.
Digital: We made a pattern prediction game using images of his favorite airplanes. A sequence appears — Airbus Beluga, Super Guppy, Airbus Beluga, Super Guppy — and he picks what comes next. He was already pattern-hunting in the wild; the airplane game just made it even more interesting.
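The prediction check for an ABAB-style pattern is a one-liner. A sketch in JavaScript (`nextInPattern` is a hypothetical name), assuming the pattern repeats with a fixed period:

```javascript
// Sketch of a pattern game's answer check (hypothetical name).
// For a repeating pattern, the next item is whatever appeared
// `period` positions ago.
function nextInPattern(seen, period = 2) {
  return seen[seen.length - period];
}
```

The game shows the sequence, computes the expected next item this way, and compares it to whichever airplane the kid taps.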
Loops
Physical: Cleaning up his toys. Pick up a block, put it in the bin. Pick up a block, put it in the bin. Same action, repeated until done. That’s a loop.
Digital: A maze game where you move a character forward by repeatedly pressing the arrow key. The loop concept already had a physical anchor from cleanup time — continue until complete.
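“Same action, repeated until done” maps directly onto a while loop. A tiny sketch of the maze movement, with hypothetical names and a one-dimensional maze for simplicity:

```javascript
// Sketch of the maze game's loop (hypothetical names).
// One arrow-key press moves the character one cell; the loop is
// the same shape as cleanup time: repeat until the bin is full.
function stepsToGoal(start, goal) {
  let position = start;
  let presses = 0;
  while (position < goal) { // same action, repeated until done
    position += 1;          // one press, one cell forward
    presses += 1;
  }
  return presses;
}
```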
Cause and Effect
Physical: Matchbox cars on a ramp. Place the car at the top, let go, it rolls down. Line up wood blocks like dominoes and knock them over. Every action has a visible, immediate consequence.
Digital: This is every game we’ve ever made. Click a button, something happens. Change a number, something changes. But the ramps and the blocks came first, and that’s why cause-and-effect on screen already makes sense to him.
What the Research Actually Says
Most developer parents skip straight to the screen — we live there, we can explain sorting abstractly, why bother with tomatoes? Because we’re not three. We have decades of physical experience backing every abstract concept we encounter. A toddler doesn’t have that yet. And the research explains why it matters.
Piaget’s stage framework established that children under seven, still in the preoperational stage, learn through direct manipulation of their environment — they literally cannot reason abstractly yet. Their thinking is bound to what they can see and touch.
Research on embodied cognition in children shows that physical manipulation creates sensorimotor memory traces that persist and transfer to new contexts. When a child sorts objects with their hands, they’re not just learning “sorting” — they’re building neural pathways that activate again when they encounter sorting in a different form.
A 2017 study on embodied math learning found that physical manipulation of objects before abstract representation improved both understanding and transfer — but only when the physical activity was directly connected to the concept. Random hands-on play didn’t help. Intentional physical exploration of the same idea they’d later encounter digitally did.
This is why the pattern matters. It’s not “play outside then do screen time.” It’s “explore this specific concept physically, then build this specific concept digitally.” The connection between the two is everything.
The Permission to Be Low-Tech
There’s a weird pressure in the developer parent community to start kids on technology as early as possible. As if your professional identity depends on your toddler being tech-forward.
Your three-year-old sorting cherry tomatoes on a cutting board isn’t falling behind; they’re building the cognitive scaffolding that will make every future digital experience meaningful instead of superficial.
No need to rush past the physical parts. The screens will be there later, but the tomatoes won’t keep.
This essay is part of the thinking behind 12 Weeks of Tech Projects to Build With Your Kid — a hands-on curriculum for ages 2-6 that pairs physical activities with AI-assisted game building. No screens required for most of it.