Patch interlace
#11
I'm sorry, I got another idea.
How about using SFR (Split Frame Rendering), which would mean each frame (odd or even) is rendered at the same time? Again, I'm just wondering Smile (I know it sounds silly because I know nothing about programming)
#12
Bump up
#13
Bump up
#14
Beware, billyash, these bumps may cause problems for you. People are not answering mainly because of that persistence. Some of the problems you experience are simply unknown to most; they happen on your rig, but you should not assume it is so for everybody.

The deinterlace methods mostly do exactly what you propose: they reconstruct the whole frame from the fields (half a frame each). And they go further, allowing either the bottom field first (BFF) or the top field first (TFF).
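To picture what "reconstruct the whole frame from the fields" with TFF/BFF means, here is a tiny Python sketch. This is my own illustration, not PCSX2 code; the `weave` name and the list-of-scanlines representation are hypothetical:

```python
def weave(field_a, field_b, tff=True):
    """Rebuild a full frame from two interlaced fields (illustrative sketch).

    Each field is a list of scanlines (rows). tff=True means field_a
    carries the top (even) scanlines; tff=False means it carries the
    bottom (odd) ones, i.e. bottom-field-first material.
    """
    top, bottom = (field_a, field_b) if tff else (field_b, field_a)
    frame = []
    for t_row, b_row in zip(top, bottom):
        frame.append(t_row)  # even output line comes from the top field
        frame.append(b_row)  # odd output line comes from the bottom field
    return frame
```

Picking the wrong field order here would swap every pair of lines, which is why TFF/BFF matters to the remixing.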
Imagination is where we are truly real
#15
I'm sorry, but I'm dying to know whether it could be done or not Smile

(05-04-2013, 03:16 PM)nosisab Ken Keleh Wrote: The deinterlace methods mostly do exactly what you propose: they reconstruct the whole frame from the fields (half a frame each). And they go further, allowing either the bottom field first (BFF) or the top field first (TFF).
Does that mean two fields (even and odd) are drawn at the same time?
#16
(05-04-2013, 03:20 PM)billyash Wrote: I'm sorry, but I'm dying to know whether it could be done or not Smile

Does that mean two fields (even and odd) are drawn at the same time?

Yes, one of the methods is called Blend for that exact reason: it blends the two fields into a complete frame before sending it to output (but all of them do it in one way or another).
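For illustration only (this is a simplified sketch of the idea, not the emulator's actual shader), a Blend-style combine can be pictured as averaging each pair of corresponding scanlines from the two fields, then duplicating the result to restore full frame height:

```python
def blend(top_field, bottom_field):
    """Blend-style deinterlace sketch (illustrative, not real PCSX2 code).

    Averaging the two fields trades the comb artifacts of weaving
    fields taken at different instants for a softer ghosting.
    """
    frame = []
    for t_row, b_row in zip(top_field, bottom_field):
        mixed = [(t + b) / 2 for t, b in zip(t_row, b_row)]
        frame.append(mixed)  # blended line stands in for the top field's line
        frame.append(mixed)  # duplicated to restore the full frame height
    return frame
```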

PS: What those fields are was discussed before; let me try to put it in a simpler way once again.

The idea behind interlacing is to save bandwidth, so the method sends only half a complete frame (called a field) and then sends the other half as the second field.

The point is that the two aren't halves of the same image (not taken at the same time, I mean). The first field is half of the first frame and the second field is half of the second frame. Because of the way the TV works, this works too: the eyes perceive it as if frames were being sent at double the rate.
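That "each field comes from a different source frame" point can be shown with a small Python sketch (hypothetical helper name, illustrative only), simulating an interlaced capture from a sequence of full frames:

```python
def to_fields(frames):
    """Simulate interlaced capture (illustrative sketch).

    From each source frame keep only half the scanlines, alternating
    which half, so consecutive fields come from *different* frames:
    that temporal mismatch is what makes deinterlacing hard.
    """
    fields = []
    for i, frame in enumerate(frames):
        if i % 2 == 0:
            fields.append(frame[0::2])  # top field taken from this frame
        else:
            fields.append(frame[1::2])  # bottom field from the *next* frame
    return fields
```

Weaving two such fields back together juxtaposes scanlines sampled at two different moments, hence the imperfect remixing described below.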

But since each field is part of a different frame, the juxtaposition is not perfect, of course, and that's the reason for choosing which field comes first, since it may affect the remixing. And because it depends on the transmitter as much as on the receiver, things are a bit more complicated.

But in the end it's always the same problem: the two fields aren't halves of the same frame, and this poses a great problem for recording and/or presenting on progressive media. It's a complex problem involving many specific methods and specialized giants of the media industry... and by itself beyond the scope of emulation.

PPS: Those deinterlace methods were not developed for the emulator; they already existed and are used for movies and all sorts of interlaced video. Don't think it's easy to find better ways to do the job... least of all methods that do it without taking a huge performance toll.
Imagination is where we are truly real
#17
How about delaying the first field and then drawing everything together once the second field is uploaded? Is it possible? ( ' ',)
#18
(05-04-2013, 04:36 PM)billyash Wrote: How about delaying the first field and then drawing everything together once the second field is uploaded? Is it possible? ( ' ',)

Billy, what I mean is that it's not as simple as you seem to believe; even trying such a thing would pull the devs away from more urgent, higher-priority work.

Think how much the giants of media, including Sony, have already invested in solving the problem; those methods are used by those giants too and were developed by them.

This will be the last time I repeat it: you are asking the devs to stop their current development line to chase chimeras and issues which aren't even "common" to most people.

PS: In a mocking way (maybe not far from the truth), following this line you will end up asking the devs to improve the video card drivers themselves Smile (since you are already asking them to improve methods developed and used by the whole media presentation industry). To say the least, the devs could stop the emulator's development at once and take up these new projects you have been proposing, and believe me... they are almost as complex as the emulator itself, if not in size then at least in their fundamental nature.
Imagination is where we are truly real
#19
I didn't mean to demand anything. I just offer ideas and ask questions; if the devs accept one and work on it, I call it a bonus, but if they won't, I won't persist, because I'm no one here. Besides, I need answers which could enrich my knowledge.
PS: people become fools if they stop asking questions
#20
(05-04-2013, 05:06 PM)billyash Wrote: I didn't mean to demand anything. I just offer ideas and ask questions; if the devs accept one and work on it, I call it a bonus, but if they won't, I won't persist, because I'm no one here. Besides, I need answers which could enrich my knowledge.
PS: people become fools if they stop asking questions

I know; why else would I be answering? But in the end it's that lack (for now) of understanding of the complexities involved that makes things look simpler than they are.

You see, developing a deinterlace method is a proper project in its own right. And if one were found that performed better and gave greater quality than the existing ones, it could even mean monetary gain, grounds for a patent, and the attention of those media producers. One could get rich!!! ... the problem is exactly that it's far from simple to find such a method, if one exists at all.

I mean, the people who developed those methods had specific goals in mind. It's not that one is "better" than another; each is tailored to specific needs.

One method is meant to be easy on performance, allowing real-time conversion at the cost of quality; another focuses on quality without worrying about performance. The latter is mostly not meant for real-time presentation on current hardware: it processes offline at near-perfect quality, but very slowly, and only then delivers that quality at normal playback speed.

One method could scan the frames and intelligently correct every detail, but using an image-reconstruction approach that could take seconds or even longer to render a single frame.

So one method is better for real-time conversion, while another is meant for high quality and doesn't care about slowness. Those two goals are mutually exclusive, which is why very few people are getting rich nowadays with new deinterlace methods: the existing ones are already optimized for what they are meant to do.

Besides, what you describe is what Blend already does. Edit: Ah, I see what you mean. No way, Billy: the information is already encoded at 30 FPS (60 fields), so increasing the pace at which the fields arrive or are processed would not change that. A meaningful attempt to increase the actual FPS would be something like what VirginKLM proposed for Kingdom Hearts II (IIRC), which involves patching that specific game to increase its original FPS pace... and, unsurprisingly, it has sync issues and is in a way meant more for making videos than for actually playing on current hardware at the correct "speed" (not FPS... SPEED).

Same speed + greater FPS = smooth
Same FPS + lower speed = sluggish!!! (that's what happens with too much VU cycle stealing).
Imagination is where we are truly real



