
How do you approach updating portions of a 2D viewport, which may involve updating the entire viewport, and textures are

Posted: Mon Aug 05, 2019 10:34 am
by Denver77
Perhaps this is not a good use case for handling with a GPU and textures, and I should just stick to CPU updates. I'm doing my own screen-sharing util, but without network constraints, as it's to be used within the same system.

If a screen only has minimal parts of it updated (like a browser window typing out a message to post on Reddit, for example!), then the majority of the image data can be reused, so the data to update shouldn't be too expensive to upload to the GPU? I thought of approaching this with tiles/a quadtree and doing some binary diffs to know which tiles need to be updated. If some tiles are identical (such as a solid-colour background), that would also reduce update overhead (at least via a GPU, afaik). Full-screen updates from media playback or a game, on the other hand, are 60 1080p textures a second (assuming all frames are unique); at 24-bit RGB that's around 375 MB/sec of raw data, which afaik is little compared to GPU bandwidth over PCIe lanes?
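A minimal sketch of the tile-diff idea described above: split each frame into fixed-size tiles, hash every tile, and compare against the previous frame's hashes to find which tiles actually changed. The tile size, the packed 24-bit RGB layout, and the hash choice are all assumptions for illustration, not part of any particular API.

```python
import hashlib

TILE = 64   # tile edge in pixels (assumed)
BPP = 3     # bytes per pixel, 24-bit packed RGB (assumed)

def tile_hashes(frame, width, height):
    """Hash each TILE x TILE block of a tightly packed RGB frame."""
    hashes = {}
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            h = hashlib.sha1()
            for row in range(ty, min(ty + TILE, height)):
                start = (row * width + tx) * BPP
                end = (row * width + min(tx + TILE, width)) * BPP
                h.update(frame[start:end])
            hashes[(tx, ty)] = h.digest()
    return hashes

def dirty_tiles(prev_hashes, cur_hashes):
    """Return the (x, y) origins of tiles that changed since the last frame."""
    return [pos for pos, digest in cur_hashes.items()
            if prev_hashes.get(pos) != digest]
```

Only the dirty tiles would then need to be uploaded, e.g. as sub-rectangle texture updates (`glTexSubImage2D` or equivalent); identical tiles (solid-colour background) cost nothing beyond the hash comparison.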

The main advantage of going with a GPU for this scenario, afaik, is that it'd reduce the CPU load of rendering those frames/textures, so it should still be useful.

Re: How do you approach updating portions of a 2D viewport, which may involve updating the entire viewport, and textures

Posted: Mon Aug 05, 2019 12:06 pm
by episoder
wrong forum section but welp.

how do you share the screen capture? a video stream? well, the way you update doesn't really matter. the encoder will take the full frame anyway, compute the delta itself, and crunch it. if you capture in general, the final image resides on the gpu. depending on which encoder you use:

nvenc: the frame is encoded on the gpu and the (now smaller) bitstream is transferred to ram to process into the file container.
software (and intel quicksync): the full frame is transferred to ram and then encoded and processed into the file container.
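the two data paths above can be seen in how you'd pick an encoder in, say, ffmpeg (assumed tool here, with Linux/X11 capture flags; adapt the input side for your platform):

```shell
# gpu path: frame stays on the gpu, nvenc encodes it there,
# only the compressed bitstream comes back to ram
ffmpeg -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 \
       -c:v h264_nvenc capture_gpu.mp4

# cpu path: full raw frames are copied to ram and encoded in software
ffmpeg -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 \
       -c:v libx264 capture_cpu.mp4
```

same capture source, different place where the raw-frame bandwidth is spent.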

in the case of those optimus graphics (2 gpus) the system usually has its own copy of the framebuffer in ram, which the igpu renders into. this is where it gets complicated. a pure system frame rendered on the igpu could be captured and encoded in software or with quicksync before it gets transferred to the dedicated graphics card for display. if you render a system frame on the igpu and have a windowed game frame using the dedicated graphics, you basically have a copy of the system frame both in ram and on the gpu, and the game renders into a part of it. that means you could capture using nvenc, or the system would copy the game content back into system ram, insert it, and encode the full frame.

that's all the logic there is, i think.