A Dense Optical Flow Field Estimation with Variational Refinement
Abstract
Optical flow has long been a central research topic in the computer vision community, and researchers have produced an extensive body of work on optical flow estimation. Among the published works, a notable approach based on variational energy minimization has served as a baseline for optical flow estimation for a long time. Variational optical flow optimization finds an approximate global minimum of a well-defined nonlinear energy formulation. It first linearizes the energy model and then applies a numerical solver, specifically the successive over-relaxation (SOR) method, to the resulting linear system. This iterative optimization requires an initialization of the optical flow field. The original work proposes a zero initialization, which works well across various environments with photometric and geometric distortions. In this work, we experiment with different flow field initialization schemes under various environment settings. We find that variational refinement seeded with a good initial flow estimate from state-of-the-art optical flow algorithms can further improve accuracy.
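To make the solver step concrete, the following is a minimal sketch of successive over-relaxation (SOR) applied to a generic linear system, the kind that results from linearizing the variational energy. The function name `sor_solve`, the toy 2x2 system, and the `x0` warm-start parameter are illustrative assumptions, not the paper's implementation; they merely mirror the abstract's point that the iteration can start from zero or from a supplied initial estimate.

```python
def sor_solve(A, b, x0=None, omega=1.5, iters=200):
    """Solve A x = b by successive over-relaxation (illustrative sketch).

    x0: optional initial guess, mirroring the flow-field initialization
        discussed in the abstract; defaults to zero initialization.
    omega: relaxation factor in (0, 2); omega = 1 reduces to Gauss-Seidel.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel sweep: use the most recently updated entries of x
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]
            # Over-relax: blend the old value with the Gauss-Seidel update
            x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

# Toy diagonally dominant system standing in for the linearized model:
#   4x + y = 1,  x + 3y = 2  ->  exact solution (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(sor_solve(A, b))
```

A warm start (passing a good `x0`) does not change the fixed point of the iteration, only how quickly it is reached, which is the intuition behind seeding the variational refinement with a strong initial flow estimate.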
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.