Volume rendering in radiology: some new videos
category: general [glöplog]
@psonice: preop planning is exactly what I think would be one of the major benefits...
@navis: how many tesla does the MRI you're working with have? We have a 3 Tesla in standard operation and a 7 Tesla for research purposes (hard to get time - like in the old mainframe days ;-)) and let me tell you, the detail one gets from those machines could probably spare us dissecting corpses in anatomy in the future (yes, we do that in medicine). And I can second your feelings on viewing MRIs in MPR, but who says it has to stay that way?
Navis: that doesn't look so much like AO, whether it is or not.
In any case, at the edges of objects, where your ray enters the feet, there is a clear way to get darker areas.
Supposing the range is [0..1023], the average value for skin/soft tissue is, let's say, 500, but your threshold is, let's say, at 300: you are going to get aliasing between 300 and 500, so depending on the angle at which your ray enters, you get a different colour, don't you?
Also, this effect is going to be much stronger if you use tricubic interpolation and/or if the skin has a different tone and/or depending on whether the dataset is blurry.
I think the best way to visualize this is with 2D views of the CT, not in the rendering, which might be confusing.
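texel's point can be demonstrated with a toy 1D march. All numbers here are made up for illustration (a linear density ramp, a fixed step of 0.13, a threshold of 300): depending on the sub-step phase at which a ray enters the surface, the first sample past the threshold lands anywhere in a band of values - exactly the banding being described.

```python
# Toy 1D ray-march illustrating threshold aliasing.
# Hypothetical field: density ramps linearly from 0 to 1000 over one unit.
def density(x):
    return max(0.0, min(1000.0, x * 1000.0))

def first_hit_value(offset, step=0.13, threshold=300.0):
    """March from `offset` and return the density at the first
    sample that crosses the threshold."""
    x = offset
    while x < 2.0:
        d = density(x)
        if d >= threshold:
            return d
        x += step
    return None

# Different sub-step phases (= different ray entry angles/positions)
# land on different densities in [300, 300 + step*1000): banding.
hits = [first_hit_value(o * 0.01) for o in range(10)]
print(min(hits), max(hits))
```

With a ramp slope of 1000 per unit and a step of 0.13, the hit values spread over a band of up to 130 units above the threshold - hence a different shade per pixel.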
mct: I used to work on a clinical 1.5, now waiting for the new 3T which will be up and running by January. I'm really really excited at the prospect of working with high quality MRI! I have a couple of projects on tongue cancer running that could benefit from the increased SNR. And I'll check them out with my application, I wonder if they can match CT for presentation of anatomy/pathology.
texel: ah, ok... fair enough, regardless of it being fake AO or not, I think it is still a quite OK-ish extra depth cue. I'll have to try SSAO in the future.
I also wonder, for anyone who has ever done this, what the state of the art is for 2D transfer function designers. The one standard solution that I have seen in papers seems too cumbersome and confusing (trapezoids of colour and transparency overlapped on a 2D surface). Maybe there is a nice solution involving the third dimension (a surface)?
Personally I'd say that foot does look kind of AO-ish, but it'd be very cheap-n-nasty SSAO. I've had SSAO look like this when experimenting and it's not been tweaked quite enough (using the "blur Z, then subtract it from the image" type method with too small a blur size). It should look much better with a decent SSAO effect, and much better with any kind of real AO.
I think as a depth cue though, you're right, it's OK. But does it look as good when you're looking internally and peeling away the layers? It was when I saw some of this on the video I thought it could do with more depth cueing. But maybe that's just youtube's video quality.
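For reference, the "blur Z, then subtract it from the image" trick mentioned above is essentially an unsharp mask on the depth buffer. A minimal 1D sketch (pure Python with illustrative names; a real version runs per-pixel in a shader over the 2D depth buffer):

```python
# Cheap "unsharp mask of the depth buffer" SSAO: blur Z, subtract it
# from the original, use the (clamped) difference to darken.
def box_blur(z, radius):
    out = []
    n = len(z)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(z[lo:hi]) / (hi - lo))
    return out

def cheap_ao(z, radius=2, strength=1.0):
    blurred = box_blur(z, radius)
    # Pixels nearer than their blurred neighbourhood stick out and
    # stay bright; pixels behind it lie in a crease and get darkened.
    return [max(0.0, 1.0 - strength * max(0.0, zi - bi))
            for zi, bi in zip(z, blurred)]

depth = [1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0]   # a notch in a flat wall
print(cheap_ao(depth))
```

Too small a blur radius is what produces the tight dark halo described above; widening the radius spreads the darkening into something more AO-like.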
texel wrote:
Quote:
Well, years ago I talked to some of my physician friends about it. They all told me that they need to feel the real touch of things to be something useful for practices.
Kinda drifting off topic here, but...
My mother recently had a cute little tumour removed via keyhole surgery. Everything was done using non-autonomous robots, including incisions and final stitch-up. She was essentially untouched by human hand.
As this type of surgery becomes more common, simulating it via software could be a good training tool. "Real touch" is still important, but it's fast becoming secondary.
Now it's my turn to be shocked:
Quote:
and a 7 tesla
7T in a medical facility?! I know MRIs require high-B-field magnets, but the Tevatron's and RHIC's magnets aren't even that strong (the LHC's dipoles only just beat it at about 8.3 T). Of course, I shouldn't be surprised: the higher the field, the better NMR... sorry... "MRI" works ;)
@t-zero: check out our little piece of hardware:
http://www.dkfz.de/de/presse/pressekonferenzen/download/Pressemappe_7-Tesla_sel.pdf
BTW: Jülich even has a 9.4 Tesla!
I once saw a 7T volume, but it was of a small mouse. I think you can get that sort of magnetic field with smaller coils.
t-zero> You have even higher magnetic fields in NMR spectroscopy. Two years ago I did some experiments on a 700 MHz spectrometer, which corresponds to 16.4 T. And it wasn't the biggest one in the lab. Granted, sample tubes do not take as much space as organic stuff so the coil is way smaller.
Speaking of AO:
http://www.youtube.com/watch?v=2AJdZtKebWg
It is SSAO - still working on it ;-). The banding effect is partly due to the sampling rate (kept low so it runs in realtime) and partly to my depth calculations, which I'm experimenting with anyway...
Much better :)
heh, solved the banding issue without any speed sacrifices. I'll have a nicer video when I get back from the pub :#)
Looking very nice... the subtle attenuation of depth and detail you get with AO seems to work very well with cluttered CT/MRI datasets - the colonoscopy dataset is particularly impressive.
Have you considered some kind of volumetric equivalent? I guess that's somewhat indulgent but it'd be totally awesome if it were possible.
Do you plan to implement/feature support for 3D mice like the ones from 3DConnexion? I use one for Google Earth and it makes navigating through complex models very easy and intuitive.
http://www.youtube.com/watch?v=avVREc5Pgvo
Salinga: I don't think so, unless there is commercial demand (which I doubt). I guess integration would be very straightforward anyway...
Alienus: what do you mean by a volumetric equivalent? To do AO on non-opaque voxels?
Well, I'm not sure about AO - just doing something about the lack of depth which seems inherent in all medical voxel renderings (although the lighting in ambivu's volumetric renderer is above average from what I've seen).
I guess if you generated surface data based on the current palette you could use that as a basis for some Frankenstein-esque volumetric AO or other shading algorithm. I've no idea if that's feasible though :D
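One crude reading of this idea, sketched under assumptions of my own (a binary opacity threshold and a 3x3x3 neighbourhood - not any particular paper's method): estimate AO per voxel as the fraction of nearby voxels that are opaque under the current palette.

```python
# Crude volumetric AO: "how boxed-in is this voxel?" - the fraction
# of in-bounds neighbours above the opacity threshold darkens it.
def voxel_ao(grid, x, y, z, threshold, radius=1):
    occluded = total = 0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                if dx == dy == dz == 0:
                    continue  # skip the voxel itself
                nx, ny, nz = x + dx, y + dy, z + dz
                if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                        and 0 <= nz < len(grid[0][0])):
                    total += 1
                    if grid[nx][ny][nz] >= threshold:
                        occluded += 1
    return 1.0 - occluded / total if total else 1.0

# A hollow centre inside a solid 3x3x3 block is fully occluded:
solid = [[[1000.0] * 3 for _ in range(3)] for _ in range(3)]
solid[1][1][1] = 0.0
print(voxel_ao(solid, 1, 1, 1, threshold=300.0))  # 0.0
```

Whether a full per-voxel pass like this is affordable at interactive rates is another question; precomputing it per palette change might be the practical route.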
These are my latest attempts with the AO. I still have lots of improvements in my head
http://www.youtube.com/watch?v=7KdFtd1JmYU
http://www.youtube.com/watch?v=uWQz4vD-4p8
Scary. Nice direction, BTW.
Is there some Z-fighting in some spots? I mean, are you rendering polygons at all??
This is not geometry rendering, but rather raymarching through voxel space.
What looks like Z-fighting is basically aliasing: the sampling rate (the stepping of the raymarching) is not small enough to capture every crossing of the threshold in question. It is easy to fix (just decrease the step, though that decreases the framerate too).
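A common compromise that avoids shrinking the step everywhere: keep the coarse march, and once a sample crosses the threshold, refine the hit between the last two samples. A sketch (bisection here; a linear solve between the two samples also works):

```python
# Coarse march + local refinement of the threshold crossing.
def refine_hit(density, a, b, threshold, iters=16):
    """Assumes density(a) < threshold <= density(b); bisect."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if density(m) < threshold:
            a = m
        else:
            b = m
    return b

def march(density, threshold, step=0.1, t_max=2.0):
    t, prev = 0.0, 0.0
    while t < t_max:
        if density(t) >= threshold:
            return refine_hit(density, prev, t, threshold)
        prev, t = t, t + step
    return None

# Hypothetical smooth field crossing 300 exactly at t = 0.3:
hit = march(lambda t: t * 1000.0, 300.0)
print(hit)  # ~0.3 despite the coarse 0.1 step
```

The coarse pass can still skip a thin feature entirely (enter and leave it between two samples); refinement only sharpens crossings that the march actually detects.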
Very good Navis! What I don't like so much is that the material is too shiny now... but that's just personal taste.
I wish you very good luck with this project!
Navis,
Can I ask what Ambivu is? Is it your project, or are you working for them, or what?
I don't know why I thought it was an open-source project of yours, but now I see it is a commercial project.
Thanks
texel: I agree, I'll reduce shininess, or rather make it more 'diffused'.
ambivu is a commercial project, made between the three of us (we've been doing similar things as a job for a while now). For pretty much all uses outside a hospital it is free, without any 'wait 1 min' or splash screens or any of that shit. It is not open source (but that's something that might change soon) - I haven't really worked on an open source project before and can't say I'm tempted to do so in the future (either for a demo or an application).
If it's voxel data, how about a simple min/max octree? That way the raymarcher wouldn't miss anything, and could work in log(n) instead of n per pixel.
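The min/max idea, sketched in 1D for readability (a real octree does the same over 2x2x2 children; names here are illustrative): precompute a min/max per node so the marcher can skip whole spans whose max is below the threshold.

```python
# Build a binary min/max tree over the samples, then find the first
# threshold crossing while skipping subtrees that cannot contain one.
def build_minmax(data):
    levels = [[(v, v) for v in data]]        # leaves: (min, max) = value
    while len(levels[-1]) > 1:
        prev, nxt = levels[-1], []
        for i in range(0, len(prev), 2):
            pair = prev[i:i + 2]
            nxt.append((min(p[0] for p in pair), max(p[1] for p in pair)))
        levels.append(nxt)
    return levels

def first_crossing(levels, threshold, node=0, level=None):
    """Index of the first sample >= threshold, or None."""
    if level is None:
        level = len(levels) - 1              # start at the root
    lo, hi = levels[level][node]
    if hi < threshold:
        return None                          # whole span skippable
    if level == 0:
        return node
    for child in (2 * node, 2 * node + 1):
        if child < len(levels[level - 1]):
            r = first_crossing(levels, threshold, child, level - 1)
            if r is not None:
                return r
    return None

data = [0, 10, 20, 0, 0, 500, 40, 0]
tree = build_minmax(data)
print(first_crossing(tree, 300))  # index 5
```

The same test along a ray through the volume skips empty bricks in O(log n)-ish work, instead of stepping through every voxel.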