Hello! It may not quite be spring yet in Korea, but the sharp edge of winter seems to have softened. I hope spring is finding its way to your city as well.

Today, I’m sharing a brief look at the process behind a short Instagram video. The images were created using the style codes from my previous newsletter in a slightly different way. I then used those images as start and end frames to generate a simple AI video.

There isn’t really a “method” to speak of—once you begin using video tools, the process becomes fairly intuitive. Still, if you’re someone who creates images but hasn’t yet turned them into video, this may be a helpful reference.

DaaleelaB by NaaveelaB

Little Window — sref 704510779, 4080710704

Little Window — sref 4080710704

DaaleelaB by NaaveelaB

My works Little Window and Little Suitcase are connected. When generating images, I often remix one from the other in Midjourney—creating a suitcase from a window, for example. As a result, some images feel clearly like a window (as in the first Little Window), while others seem to sit somewhere between a window and a suitcase (as in the second). In the first Little Window, I combined the two style codes I previously used separately in the last newsletter, and I find the result more satisfying.

When generating video, I don’t use text-to-video at all. I always work with image-to-video. You only need one or two images—sometimes more. For this piece, I used two. The keyframe images can be chosen according to your own preference. Even longer videos are often constructed by connecting several short clips generated in this way.
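As a side note for anyone who prefers the command line: short clips like these can also be stitched together outside a video editor using ffmpeg’s concat demuxer. The sketch below is a minimal Python wrapper around that command; the file names are placeholders, and it assumes all clips come from the same tool with matching codec, resolution, and frame rate.

```python
import subprocess
import tempfile
from pathlib import Path

def concat_clips(clip_paths: list[str], output: str) -> None:
    """Join short generated clips into one video with ffmpeg's concat
    demuxer. Stream-copying (-c copy) avoids re-encoding, which works
    when all clips share the same codec, resolution, and frame rate."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in clip_paths:
            # The concat demuxer expects one "file '<path>'" line per clip.
            f.write(f"file '{Path(clip).resolve()}'\n")
        list_file = f.name
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0",
         "-i", list_file, "-c", "copy", output],
        check=True,
    )
    Path(list_file).unlink()

# Hypothetical file names, for illustration only.
concat_clips(["clip_wide.mp4", "clip_mid.mp4", "clip_close.mp4"],
             "little_window.mp4")
```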

When working with two keyframes, a common approach is to choose a wide shot and a close-up. That was my original plan when preparing last week’s Instagram video. However, I didn’t have a strong pair of images that clearly functioned as wide and close perspectives. I ended up selecting two Little Window images—loosely corresponding to distance and proximity—and the result felt unexpectedly right, so I uploaded it as it was.

It might have been even better if I had adjusted the color tone of one of the images slightly (these days it’s quite easy with Nano Banana). Still, the scene I had imagined—“a window entering the girl’s head as a cloud head begins to grow”—came to life. Including the wide and close variations, I connected three short clips in total.

DaaleelaB by NaaveelaB

Kling interface via Freepik — image-to-video generation

Little Window — sref 704510779, 4080710704

DaaleelaB by NaaveelaB

In summary: prepare two images and generate the video using an AI tool such as Kling (upload, prompt, generate—that’s it). It can also be helpful to experiment with multiple tools through services like Freepik, where you can access Kling and others. I generated this particular video using Kling via Freepik.
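If you’d rather script that upload–prompt–generate step than click through a web interface, the flow maps onto a single API call. The Python sketch below is purely illustrative: the endpoint, field names, and response shape are placeholders rather than Kling’s or Freepik’s actual API, so check your provider’s documentation for the real details.

```python
import requests

# Illustrative placeholders only: the real endpoint, auth scheme, and
# parameter names depend on the provider (e.g. Kling via Freepik).
API_URL = "https://api.example.com/v1/image-to-video"
API_KEY = "YOUR_API_KEY"

payload = {
    # The motion described in words, as you would type it in the web UI.
    "prompt": "a window drifts into the girl's head as a cloud begins to grow",
    "start_frame_url": "https://example.com/little-window-far.png",
    "end_frame_url": "https://example.com/little-window-near.png",
    "duration_seconds": 5,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()

# Most generation services return a job ID to poll rather than the
# finished file; here we simply print whatever the service sends back.
print(response.json())
```

Either way, the web interface is doing the same thing underneath: two keyframes in, a prompt, and a short clip out.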

A small point: your start and end frames do not need to be similar or visually consistent. In fact, if you want a more dramatic transformation, they shouldn’t be too similar. Rather than asking which keyframes you should choose, it’s better to select them according to the video you’re imagining.

Personally (though it’s difficult to explain precisely), I tend to choose two images that feel sensorially connected but not logically sequential—similar in atmosphere, but not bound by narrative order. That said, it always depends on the work. For a cinematic piece, you might choose frames based on narrative progression. For a decorative display—say, a pattern video shown in a department store—you might pair floral and fish motifs. In any case, once the tool generates a video from the two images, I watch the result and adjust the images themselves, rather than simply changing the prompt. Sometimes that leads to entirely new directions.

In another Instagram post that I’ve scheduled, I used keyframe images adjusted with Nano Banana. Next time, I’ll share the process behind those images and the resulting video.
