What values to use for minimum blanking for a pixel streaming interface

I'm using the pixel streaming interface to analyse some HDL-supported blocks, basically trying to get more details on timing and the number of clock cycles needed to produce an output. The goal is to get a detailed enough understanding to express these as generic equations and estimate timing for larger inputs.
The block I'm testing now is the Image Filter block. Its timing and clock-cycle counts depend on the values chosen for blanking, and I'm unsure about those. Is there a rule for picking the minimum blanking values when you have a custom input? Say I'm feeding a 4-by-4 input into the filter: what would be the minimum blanking values I'd need to produce a valid output? Until now I've just been trying different values until I get valid results.
Any information on what happens inside the Image Filter block would be appreciated too. I'm assuming it uses the control signals to buffer the valid pixels into FIFOs, then pops out elements to produce a valid window, which is multiplied and accumulated with the filter coefficients.

Answers (1)

Typical video interfaces (240p and higher resolution) will have sufficient blanking, so one way to go is to pick blanking requirements from the closest resolution. Blanking intervals are listed in a table in the help page for the Frame To Pixels block.
For the Image Filter, a good back-of-the-envelope calculation is to make the blanking twice the kernel size. Based on the Image Filter block documentation, the blanking also needs to be greater than the latency of the block.
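The two constraints above can be combined into a tiny helper. This is only a sketch of the rule of thumb stated here, not a documented formula; `measured_latency` is whatever you observe with the Logic Analyzer, and the `+ 1` simply enforces "greater than" rather than "equal to":

```python
def min_blanking(kernel_size, measured_latency=0):
    """Rule-of-thumb minimum blanking for the Image Filter block.

    Combines the two constraints from the answer above:
      * blanking should be at least twice the kernel size, and
      * blanking must exceed the block's latency (measured, for
        example, with the Logic Analyzer).
    """
    return max(2 * kernel_size, measured_latency + 1)

# Example: a 3x3 kernel with no latency measurement yet
print(min_blanking(3))       # -> 6
# Example: same kernel, but a measured latency of 22 cycles dominates
print(min_blanking(3, 22))   # -> 23
```

Once you have a real latency measurement, the latency constraint usually dominates the kernel-size constraint.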
You can send the input and output signals of the Image Filter block to the Logic Analyzer to quickly determine the latency.

8 Comments

Thanks for the answer. I have tried the Logic Analyzer, but it does not really offer much more than straight-up simulating the HDL code. What I'm interested in is actually understanding the latency in terms of parameters, i.e. what happens inside the block and how many clock cycles each step takes.
If you send the input and output signals of the Image Filter block to the Logic Analyzer in Simulink, you can see the latency of the block.
There is no way to look inside the block to see the latency within it.
One way to fix the latency so that it is constant is to supply the coefficients from the input port, in which case the latency stays constant because no optimizations are applied.
That's what I tried to do. I used different input sizes and different filter sizes and attempted to find a consistent relation between the input size/timing and the output timing, but could not. My first assumption was that, for the first valid output, the line buffers would need to store the number of rows required to generate a valid window (while considering padding). So for an n-by-n filter you'd first need to store n-1 rows, then pop elements from each to generate a window, which is then multiplied and accumulated. There is latency for buffering in the data and latency for operating on it. It seems reasonable to look for relations between the parameters and the latency to better understand the system and design accordingly, as opposed to picking configurations and testing them one by one.
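The buffering hypothesis I describe above can be written down as a parametric model. To be clear, this is my assumption, not the documented internals of the Image Filter block, and `pipeline_cycles` is a free parameter meant to be fitted against measured latencies:

```python
def first_output_latency(line_length, kernel, pipeline_cycles):
    """Hypothetical cycles until the first valid output of an
    n-by-n filter, following the buffering assumption above
    (NOT the documented internals of the Image Filter block).

    line_length     : pixels per line, including any horizontal blanking
    kernel          : filter size n (n-by-n window)
    pipeline_cycles : free parameter for the multiply-accumulate
                      pipeline, to be fitted against measurements
    """
    # (n - 1) full lines plus n pixels of the next line must arrive
    # before the first complete window exists.
    buffering = (kernel - 1) * line_length + kernel
    return buffering + pipeline_cycles

# Example: 4-pixel lines (the 4-by-4 input above, ignoring blanking),
# 3x3 kernel, pipeline delay left at zero
print(first_output_latency(4, 3, 0))  # -> 11
```

Comparing this model's predictions against Logic Analyzer measurements for several kernel sizes would show how much of the latency is buffering and how much is fixed pipeline delay.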
Getting the latency for one exact configuration is not as helpful for analysis and research purposes, and using the block as a black box does not seem appealing either.
Granted, my assumption about the functionality could be wrong. It seems the last resort is going through the generated HDL code for more clues, which is not my speciality.
Could you please tell me the purpose of determining the latency? The pixel control signals tell you when to use the output of the Image Filter, and the Pixel Stream Aligner block can help you align two streams.
My work is more research-oriented than engineering, which is why I wanted to better understand the system: I plan to model the behaviour with a set of equations and highlight exactly how different kernel sizes (and other parameters), for instance, can affect the performance.
I suppose the tools are more engineering-oriented, since most of the inner workings are not made available. That leaves digging through the generated HDL code and writing testbenches for all the modules for analysis, which would be very difficult and time-consuming.
It's difficult to define what latency means, so I've attached a model that computes two different kinds of latency: the number of steps between vStart for the input and output of the Image Filter, and the number of steps between hStart for the input and output of the Image Filter.
For the model shown with 240p video, we have 402 steps in each line. Given this and the default Image Filter coefficients, you can see that we compute 22 steps between hStart In and hStart Out, and 424 steps between vStart In and vStart Out.
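A quick consistency check on those two measurements: with 402 steps per line, the vStart-to-vStart delay works out to exactly one full line plus the hStart-to-hStart delay. This is just arithmetic on the numbers reported above, not a claim about how the block is implemented:

```python
# Measured values for 240p video with the default filter coefficients
steps_per_line = 402   # total steps per line, including blanking
hstart_latency = 22    # steps between hStart In and hStart Out

# The vStart delay is one full line plus the hStart delay
vstart_latency = steps_per_line + hstart_latency
print(vstart_latency)  # -> 424, matching the measured vStart delay
```

This suggests the first output line starts one full line after the first input line, offset by the same per-line latency seen on hStart.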
Hope this helps.
Thank you for your help. What I mean by latency is simply exact timing: hardware systems are deterministic, so if I have a design that carries out 2D convolutions on an input, I should know exactly how many cycles it takes for the input to be stored into line buffers or FIFOs, and how many cycles it takes to be processed, multiplied, and accumulated. Granted, this is usually done on the VHDL end of things using testbenches, but HDL Coder presents a new model-based design approach, so I was trying to figure that out from the model. So basically, for those 22 and 424 numbers, I wanted to know where they come from each time.
In that case, I think it is best if you run the HDL code in the HDL simulator to see the latency. The line buffer code is in a separate entity, so if you look at the vStart and processData signals coming out of that module in a waveform viewer, it's pretty easy to track down the latency.

This question is closed.

Release

R2019a

Asked:

on 21 May 2019

Closed:

on 20 Aug 2021
