I went down a rabbit-hole of using more complicated optical flow techniques. My friend Tyler Henry used openFrameworks for his project, so I consulted with him on an approach:
For pre-recorded video, use OpenCV:
1. Get the ofxCv add-on if you don't have it; it's where all the magic happens.
Note: always include the ofxOpenCv add-on (it comes with oF) in your project when you use ofxCv
(ofxOpenCv includes the OpenCV lib, which ofxCv needs)
2. blob tracking:
ofxCv has a nice implementation.
Check out the example projects: example-contours-color and example-contours-tracking.
Combining those will let you: a) get blobs by color and b) ID each blob and track it, so you can keep track of which blob is which. That's only necessary if you have multiple blobs (e.g. if two objects in your video are 35°C).
3. optical flow
Process the video using Farneback flow.
Either try ofxCv's example-flow project (you need Kyle M's ofxControlPanel add-on for it to run), or try my version: optFlowTest
Tyler's version adds some processing to get the average flow, and he reversed the flow direction (ofxCv computes flow pointing toward the previous video frame; his version has flow pointing forward, to the current frame).
a. blob detection for your target objects
b. Farneback optical flow on the whole scene -> spits out a vector<ofVec2f> with magnitude/direction of flow per pixel
that vector<ofVec2f> is your flow field
c. for loop through the vector<ofVec2f> and compare the pixel position of each flow point to the pixel positions in your blob. Set every point in the flow field that's not in the blob to (0,0), i.e. no flow. Now you have just the flow inside your blobs.
d. throw particles on your blobs, and tell them to follow the flow of the pixel they're at. They should move with your blobs, according to the flow of the video. You could add some randomness to their movement to make them more particle-y. But when they get to a point that has no flow, they should either stop or bounce backwards, back into the flow field.
Farneback flow might ignore smooth (i.e. no image texture) parts of your video. For that reason you might want to use the non-heat-mapped version of your video, since the heat colors would probably reduce texture. You could also try sharpening the video.
After step c. you might want to save the flow field as a csv (or json?) file, since calculating the flow is really expensive. That way, you calculate the flow once and reuse the saved values later. Each row in the csv would correspond to a frame of video (keep track of the frames using ofVideoPlayer.getCurrentFrame() because it might skip frames). Each pair of columns in the csv would contain the ofVec2f.x and ofVec2f.y data for one pixel of your video.
Also try scaling the video down to something like 320x240 if you have issues with calculating the flow. Farneback flow will choke on hi-res video.