Image-to-video generation, which aims to generate a video starting from a given reference image, has drawn great attention. Existing methods extend pre-trained text-guided image diffusion models to image-guided video generation, but they often suffer from low fidelity or temporal flickering because the image guidance is shallow and temporal consistency is poor. To tackle these problems, we propose DreamVideo, a high-fidelity image-to-video generation method that adds a frame retention branch to a pre-trained video diffusion model. Instead of injecting the reference image into the diffusion process only at a semantic level, DreamVideo perceives the reference image through convolution layers and concatenates the resulting features with the noisy latents as model input, so the details of the reference image are preserved to the greatest extent. In addition, with double-condition classifier-free guidance, a single image can be steered toward videos of different actions by providing different text prompts, which is significant for controllable video generation and has broad application prospects. We conduct comprehensive experiments on public datasets, and both quantitative and qualitative results show that our method outperforms state-of-the-art methods. In particular, our model exhibits strong image retention and, to the best of our knowledge, achieves the best fidelity results on UCF101 among image-to-video models, while precise control can be achieved by giving different text prompts. Further details and comprehensive results of our model are presented here.
The architecture of our DreamVideo model consists of two main components: the primary U-Net block and the Image Retention block. Modules marked with flame symbols are trainable. The reference image is processed by a convolution block and concatenated with the representation of the noisy latents. The Image Retention Module, a side branch copied from the downsample blocks of the U-Net, maintains the visual details of the input image while also accepting text prompts for motion control.
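To make the data flow above concrete, the following is a minimal PyTorch-style sketch of how the pieces could fit together; the module names, channel sizes, and the stand-in for the copied downsample blocks are illustrative assumptions for exposition, not the released DreamVideo code.

```python
import torch
import torch.nn as nn

class ImageRetentionSketch(nn.Module):
    """Illustrative sketch (not the released DreamVideo code) of the caption above:
    a convolution block embeds the reference image, the features are concatenated
    channel-wise with the noisy video latents, and a trainable copy of the U-Net
    downsample path (stand-in below) produces image-retention features that are
    fed back into the frozen backbone."""

    def __init__(self, latent_channels: int = 4, hidden: int = 64):
        super().__init__()
        # convolution block that perceives the reference image (RGB -> latent channels)
        self.image_conv = nn.Sequential(
            nn.Conv2d(3, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_channels, 3, padding=1),
        )
        # stand-in for the trainable copy of the U-Net downsample blocks
        self.down_copy = nn.Conv3d(2 * latent_channels, latent_channels, 3, padding=1)

    def forward(self, noisy_latents: torch.Tensor, ref_image: torch.Tensor) -> torch.Tensor:
        # noisy_latents: (B, C, F, H, W) video latents; ref_image: (B, 3, H, W),
        # assumed here to already match the latent resolution
        img_feat = self.image_conv(ref_image)                                # (B, C, H, W)
        img_feat = img_feat.unsqueeze(2).expand(-1, -1, noisy_latents.shape[2], -1, -1)
        branch_in = torch.cat([noisy_latents, img_feat], dim=1)              # channel-wise concat
        residual = self.down_copy(branch_in)                                 # image-retention features
        return noisy_latents + residual                                      # injected into the backbone

# toy shapes: batch 1, 4 latent channels, 16 frames, 32x32 latent resolution
branch = ImageRetentionSketch()
out = branch(torch.randn(1, 4, 16, 32, 32), torch.randn(1, 3, 32, 32))
```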
It can be observed that increasing the guidance scale (GS) substantially intensifies the brightness and contrast of the generated videos, indicating that the generated results progressively shift toward the image domain.
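For reference, one common formulation of double-condition classifier-free guidance combines an unconditional prediction with image-conditioned and image-plus-text-conditioned predictions under two separate scales; the sketch below assumes GS refers to the image-side scale, and the exact weighting used in our implementation may differ.

```python
import torch

def double_condition_cfg(eps_uncond, eps_img, eps_img_text, gs_image, gs_text):
    """Sketch of a common double-condition classifier-free guidance rule
    (the exact weighting used in DreamVideo may differ):
      eps_uncond   - noise prediction with neither the image nor the text
      eps_img      - noise prediction conditioned on the reference image only
      eps_img_text - noise prediction conditioned on both image and text
    """
    return (eps_uncond
            + gs_image * (eps_img - eps_uncond)      # pull the sample toward the reference image
            + gs_text * (eps_img_text - eps_img))    # pull the sample toward the text-described motion

# raising gs_image pushes the generation toward the image domain, matching the
# brightness/contrast shift described above (shapes are arbitrary latent tensors)
eps = double_condition_cfg(torch.randn(1, 4, 16, 32, 32),
                           torch.randn(1, 4, 16, 32, 32),
                           torch.randn(1, 4, 16, 32, 32),
                           gs_image=3.0, gs_text=7.5)
```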
Our model consistently achieves the lowest Fréchet Video Distance (FVD) scores on both datasets, which shows that it generates videos with better temporal continuity. Our Inception Score (IS) is also higher than that of the other methods, meaning that the videos generated by DreamVideo have the best quality. The FFF-CLIP metric is computed by using CLIP to measure the similarity between the first frames of the generated and original videos. However, we believe this metric may not fully reflect the quality of the generated first frame: the visual features encoded by CLIP are coarse and capture the presence of objects in the image more than the quality of the first frame. FFF-SSIM demonstrates our model's ability to generate videos with high image retention.
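As a concrete illustration of how such first-frame metrics can be computed (our exact evaluation scripts are not reproduced here; the package choices and function names below are assumptions), FFF-CLIP can be taken as the cosine similarity of CLIP image embeddings of the two first frames, and FFF-SSIM as the SSIM between them:

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
import clip                                   # OpenAI CLIP package (github.com/openai/CLIP)
from skimage.metrics import structural_similarity

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def fff_clip(gen_first_frame: Image.Image, ref_first_frame: Image.Image) -> float:
    """Cosine similarity between CLIP image embeddings of the generated and reference first frames."""
    with torch.no_grad():
        a = model.encode_image(preprocess(gen_first_frame).unsqueeze(0).to(device))
        b = model.encode_image(preprocess(ref_first_frame).unsqueeze(0).to(device))
    return F.cosine_similarity(a, b).item()

def fff_ssim(gen_first_frame: Image.Image, ref_first_frame: Image.Image) -> float:
    """SSIM between the two first frames (assumes both frames share the same resolution)."""
    a = np.asarray(gen_first_frame.convert("RGB"))
    b = np.asarray(ref_first_frame.convert("RGB"))
    return structural_similarity(a, b, channel_axis=-1)
```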
Comparison of videos generated by I2VGen-XL, VideoCrafter1, and DreamVideo. The videos produced by our method closely resemble the source images, while those produced by I2VGen-XL and VideoCrafter1 are significantly weaker in color, position, and object retention.