Pages: 1 [2] 3 4 5 6 7 8 9 :: one page |
|
Author |
Thread Statistics | Show CCP posts - 17 post(s) |
|
HPA Illuminator
State War Academy Caldari State
10
|
Posted - 2016.03.11 07:07:25 -
[31] - Quote
Nevyn Auscent wrote:http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.
LOL! It's either imaged on top of the cell (= wrong focus), or it's a dying cell. If everything looked like that, it would clearly be an "unidentifiable". |
|
|
HPA Illuminator
State War Academy Caldari State
10
|
Posted - 2016.03.11 07:12:13 -
[32] - Quote
Van Dracon wrote:The new feature has caused many inconsistent results. Especially with basic findings where you are 100% accurate but get a failed result. This has left me wondering whether I should keep wasting my time on a failing project.
For starters, the pictures should be bigger instead of feeling like looking through a keyhole.
Can we get 3D images if possible?
Why did I fail with one of the results? Please give users more explanatory information to educate them on failed results, instead of leaving them guessing.
I like to help, like many others, though I believe the new feature needs to be expanded. At this stage, until I see improvement, I won't be as committed to it as I once thought.
I think (not sure) that there's a forum thread better suited for feedback on the UX/UI. I'll ask people to have a look in this forum too, though.
3D images aren't possible at the moment, as we've so far only acquired 2D images. The reason is that 3D acquisition simply takes too much time (right now one image takes around 3 s of actual acquisition on the microscope; a 3D stack would take a minute or so... times 150k samples = not doable, unfortunately. It would be awesome to have, though!). |
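As a sanity check, the throughput numbers quoted above work out as follows (a quick Python sketch; the 3 s, 60 s and 150k figures come from the post, everything else is illustrative):

```python
# Back-of-envelope microscope time from the figures quoted above:
# ~3 s per 2D image, ~60 s per 3D stack, ~150,000 samples.
SAMPLES = 150_000
SECONDS_PER_DAY = 86_400

def total_days(seconds_per_image: float) -> float:
    """Total acquisition time, in days, at one image per sample."""
    return SAMPLES * seconds_per_image / SECONDS_PER_DAY

days_2d = total_days(3)    # ~5.2 days of pure scanning
days_3d = total_days(60)   # ~104 days, which is why 3D is off the table
```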
|
HPA Illuminator
State War Academy Caldari State
10
|
Posted - 2016.03.11 07:12:41 -
[33] - Quote
Annemariela Antonela wrote:
Woop woop! |
Sp3ktr3
Caldari Provisions Caldari State
2
|
Posted - 2016.03.11 08:01:44 -
[34] - Quote
Annemariela Antonela wrote:
Well I do science all day, and then come home to fly spaceships and blow stuff up and end up doing science again. It never ends!! |
Van Dracon
Tesla Aerospace Industries
3
|
Posted - 2016.03.11 08:56:52 -
[35] - Quote
HPA Illuminator wrote:Van Dracon wrote:The new feature has caused many inconsistent results. Especialy with basic findings where you are 100% accurate but get a failed result. This has baffled me if I should continue wasting my time in project failure.
For starters the pictures should be increased in size instead of looking through a key hole.
Can we get 3d images if possible.
Why did I fail with one of the results. Please give back users more explanatory information to educate them more on failed results instead of guessing.
I like to help like many others though I believe the new feature needs to be expanded with more features. At this stage until I see improvement I won't be committed to it as I once thought. I think (not sure) that there's a forum thread that's better for giving feedback regarding the UX/UI. I'll ask ppl to have a look in this forum too though. 3D images is not possible atm as we've so far only acquired 2D images. The reason is that taking 3D just takes too much time (atm 1 image takes around 3s by the microscope, just the actual acquisition, if it were to be 3D it would be a minute or so... that times 150k samples = no doable, unfortunately. It would be awesome to have though!).
Hey, thanks for the reply. Just to say something about 3D: it would be awesome. Let me explain my views on 2D vs 3D. Say we have all these green dots supposedly only in the blue area. With 2D you assume they're all inside when you look at the photo. But 3D can uncover more detail; for example, 3D shots can sometimes tell you whether all those green dots are floating above the blue area or literally sitting in it. That could be a major factor in eliminating errors and being more accurate, right?
Even though it takes a minute, wouldn't it be more important to be accurate? I would rather be accurate than not, especially with this application. I don't want to sound like a smart-ass, but accuracy is prime. Maybe your software does it already but reports back in 2D, I'm not sure. Or are there other applications that can speed up image processing? I think over the long run you would have more accurate results if 3D is the latest technology we use today, e.g. the new 3D laser printers.
Anyway, those are just my own thoughts. |
HPA Illuminator
State War Academy Caldari State
14
|
Posted - 2016.03.11 10:28:11 -
[36] - Quote
Van Dracon wrote:HPA Illuminator wrote:Van Dracon wrote:The new feature has caused many inconsistent results. Especialy with basic findings where you are 100% accurate but get a failed result. This has baffled me if I should continue wasting my time in project failure.
For starters the pictures should be increased in size instead of looking through a key hole.
Can we get 3d images if possible.
Why did I fail with one of the results. Please give back users more explanatory information to educate them more on failed results instead of guessing.
I like to help like many others though I believe the new feature needs to be expanded with more features. At this stage until I see improvement I won't be committed to it as I once thought. I think (not sure) that there's a forum thread that's better for giving feedback regarding the UX/UI. I'll ask ppl to have a look in this forum too though. 3D images is not possible atm as we've so far only acquired 2D images. The reason is that taking 3D just takes too much time (atm 1 image takes around 3s by the microscope, just the actual acquisition, if it were to be 3D it would be a minute or so... that times 150k samples = no doable, unfortunately. It would be awesome to have though!). Hey thanks for the reply, Just to tell you something about 3D it would be awesome. Let me explain my views on 2D and 3D. Now for e.g. we have all these green dots supposedly only in the blue area. Ok with 2D you assume there all inside when you look at the photo. But with 3D it can uncover more detail for e.g. at times 3D shots can tell you if all those green dots are floating above the blue area or if they are literally sitting in the blue area. This could be one major factor eliminating errors and being more accurate right ? Even though it takes 1 minute wouldn't it be more important to be more accurate ?, I would importantly like to be more accurate rather than not especially with this application. I dont want to sound like a smart ass but accuracy is prime. Maybe your software does it already but reports back in 2D i'm not sure maybe. Or is there other applications that can speed up the time for processing images quicker ?. I think over the long run you would have more accurate results if 3D is the latest technology we use today. E.g. the new 3D laser printers. Anyways thats just my own thoughts.
Thanks for elaborating, it's interesting to hear what people think! I agree that accuracy is really important. NB that each image is of a focal plane/narrow slice of the cell, so for something to be seen from "outside" the slice, it either has to be a really strong staining, or the image has to be acquired at the interface between nucleus and cytoplasm.
What we do now, rather than acquiring 3D images: if something can't be visualized perfectly in one image, we take several images of the same cells at different focal planes. That way we can visualize e.g. focal adhesions in one image, and e.g. nuclear speckles in the next. But yes, it would be ideal to have high-res 3D images to scroll through.
If you see spots in the blue area, and when toggling the red on/off you see a nice outline of the cell without any red in the blue parts, then you know the image you're looking at was acquired "in the middle" of the cell. Thus the spots in the blue should really be there, rather than floating below/on top. Also, the look of the spots can usually tell you whether they're in focus. If they're big (subjective, I know) and somewhat blurry, they aren't in focus, and you should be hesitant. If they're more distinct, they're in the same focal plane as the nuclei, and you can trust them.
Technical comment:
No, we acquire 2D images (Leica SP5 confocal microscopes). We do have the possibility of acquiring 3D images in the form of stacks, i.e. the microscope first images a slice at the bottom of the cell and then works its way up to the top, slice by slice (the number of slices can be set manually). The reason for using this kind of microscopy is that it gives high-resolution images, and we can look at one focal plane at a time (compared to a microscope that images the whole cell at once, but at much worse resolution). High-res images are really necessary to see substructures such as the centrosome or the fibrillar center. Each focal plane (slice) is very narrow, so (as I mentioned above) for something to be seen from below/above, the signal either has to be really strong, or the slice has to be in close proximity to e.g. the interface between the nucleus and the cytoplasm.
The reason image acquisition is so time-consuming is that the microscope scans the sample with one laser, then the second, and finally the third (exciting the fluorophores in the sample at different wavelengths; we have to excite separately to avoid bleed-through between channels, e.g. staining in the blue showing up in the green).
Also, the resolution in z will never be as good as the xy resolution (a general consequence of physics/how light moves through different media, which I'd have to refresh my memory on before commenting further), which means that even if we did 3D images, they wouldn't be perfect.
Sorry for nerding out on microscopes :) |
Van Dracon
Tesla Aerospace Industries
3
|
Posted - 2016.03.11 11:27:57 -
[37] - Quote
I really enjoyed reading your reply. I didn't realize how many different cameras there are and how much light can or can't get through. I understand about the slice; I guess that is very accurate in some way. Speaking for myself and probably many others, I think it's going to take some time to get used to. I don't come from a science background; I worked in I.T. for some years and was just looking at it from an application point of view.
I spent many years in software automation. It would be nice to have recognition filters, so instead of you selecting which reference images come close to the examined photo, you could set up filter options for it instead. E.g. if you see more than 5 green dots in the blue area, have that selected automatically instead of selecting it on the right-hand side of the pane. I guess since the photos are static we can't set up such filters, because a photo is a photo.
Because you work with colors, is there a way we can calculate how much red, green, and blue light is in the photo? Why am I asking this? Again, having filters set up based on light ratios as percentages. So if there's 60% green light in the photo, which category is it likely to be, or be closest to, among the three on the right side of the pane?
Again, why am I talking about colors? There are so many results; I'm not sure whether, e.g., 5 or fewer green dots in the blue area have the same green light frequency as the others that fall in the same basket. Or is most of this color stuff I'm talking about irrelevant?
Just my thoughts.
Once again, thanks for sharing. |
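The channel-ratio idea above is easy to prototype. A minimal sketch, assuming pixels are plain (r, g, b) tuples rather than a real image file (a real version would read per-pixel data via an imaging library):

```python
def channel_fractions(pixels):
    """Fraction of total intensity contributed by the R, G and B channels.

    `pixels` is a list of (r, g, b) tuples with values 0-255; this is a
    stand-in for per-pixel data read from an actual image.
    """
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
    grand = sum(totals) or 1  # guard against an all-black image
    return [t / grand for t in totals]

# Toy 2-pixel "image" that is mostly green:
fractions = channel_fractions([(10, 80, 10), (0, 100, 0)])
# fractions[1] is 0.9, i.e. 90% of the light is in the green channel
```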
HPA Illuminator
State War Academy Caldari State
16
|
Posted - 2016.03.11 13:00:26 -
[38] - Quote
Van Dracon wrote: I really enjoyed reading your reply. Didn't realize how many different cameras there are and how much light can or can't get through. I understand about the slice, i guess that is very accurate in some way. Well speaking on behalf of me and probably many others i think it's going to take some time to get use to. I don't come from a scientist background. Worked in I.T. for some years and was just looking at it from an application point of view.
I spent many years in software automation. Would be nice to have recognition filters so instead of you selecting what images come close to the examined photo you could possibly setup filter options for it instead. E.g. if you see more than 5 green dots in the blue area have that selected instead of you selecting it on the right hand side of the pane. I guess since the photos are static we can't setup such filters cause a photo is a photo.
Because you work with colors is there a way we can calculate how much red/ green, blue light is in the photo. Why am i asking this ?. Again having filters setup based on light ratios based on %. So if there 60% green light in the photo which is it likely to be or be as close the 3 category on the right side of the pane.
Again why am i talking about colors, there are so many results not sure for e.g. 5 or less green dots in blue area have same green light frequency light as the others that full in the same basket. Or is most of this color thing I am talking about irrelevant.
Just my thoughts.
Once again thanks for sharing
I'll actually ask my colleague HPA_Dichroic to give some input on this, as he works with image analysis and can comment much better.
Thanks for discussing & coming up with ideas, love the interest from everyone! |
HPA Dichroic
Polaris Corporation
15
|
Posted - 2016.03.11 16:02:18 -
[39] - Quote
HPA Illuminator wrote:Van Dracon wrote: I really enjoyed reading your reply. Didn't realize how many different cameras there are and how much light can or can't get through. I understand about the slice, i guess that is very accurate in some way. Well speaking on behalf of me and probably many others i think it's going to take some time to get use to. I don't come from a scientist background. Worked in I.T. for some years and was just looking at it from an application point of view.
I spent many years in software automation. Would be nice to have recognition filters so instead of you selecting what images come close to the examined photo you could possibly setup filter options for it instead. E.g. if you see more than 5 green dots in the blue area have that selected instead of you selecting it on the right hand side of the pane. I guess since the photos are static we can't setup such filters cause a photo is a photo.
Because you work with colors is there a way we can calculate how much red/ green, blue light is in the photo. Why am i asking this ?. Again having filters setup based on light ratios based on %. So if there 60% green light in the photo which is it likely to be or be as close the 3 category on the right side of the pane.
Again why am i talking about colors, there are so many results not sure for e.g. 5 or less green dots in blue area have same green light frequency light as the others that full in the same basket. Or is most of this color thing I am talking about irrelevant.
Just my thoughts.
Once again thanks for sharing
I'll actually ask my colleague HPA_Dichroic to give some input on this, as he's working with image analysis and could comment so much better. Thanks for discussing & coming with ideas, love the interest from everyone!
Hey Van Dracon, I'm not 100% sure what you mean here by recognition filters. Do you mean some sort of semi-automated classification? We could certainly provide a suggested classification based on some basic image features, but where would the fun in that be?
I am working with my student on implementing a fully automated system for this task that will use neural networks to recognize these sub-cellular patterns. Ideally this system will answer most of the images and give a confidence score, so that in the future we only have to review the least confident classifications. The system is supervised, though, so it needs lots of quality training data, which is where you come in.
Maybe you can explain more about your idea and I'll try to understand it better. |
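The confidence-based triage described above might look something like this in outline (names and threshold are purely illustrative, not the actual system):

```python
def triage(predictions, threshold=0.9):
    """Split classifier outputs into auto-accepted labels and a review
    queue, keeping human effort for the uncertain cases.

    `predictions` maps an image id to a (label, confidence) pair.
    """
    auto, review = {}, []
    for image_id, (label, confidence) in predictions.items():
        if confidence >= threshold:
            auto[image_id] = label
        else:
            review.append(image_id)
    return auto, review

auto, review = triage({
    "img_001": ("nucleoli", 0.97),   # confident: accepted automatically
    "img_002": ("cytoplasm", 0.55),  # uncertain: sent to the players
})
```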
Helios Anduath
Signal Cartel EvE-Scout Enclave
113
|
Posted - 2016.03.11 16:27:19 -
[40] - Quote
A quick question on the cell-to-cell variation option: how much does it factor into your accuracy change if you tick cell-to-cell variation but others haven't, or if you don't tick it and others have? Is it treated just like the other answers?
The reason I'm asking is that a lot of the time this box doesn't seem to get ticked by the majority for some obvious variation, so the consensus has it at 0. I guess it's also even more subjective than the other classifications, as it comes down to how much variation justifies ticking the option. |
|
Beta Maoye
103
|
Posted - 2016.03.11 17:20:24 -
[41] - Quote
I think the classification of this image is wrong. It is not Nucleus. It should be Mitochondria. |
SurrenderMonkey
Space Llama Industries
2197
|
Posted - 2016.03.11 17:38:16 -
[42] - Quote
Helios Anduath wrote:A quick question on the cell-to-cell variation option, how much does it play into the weighting of your accuracy change if you tick cell-to-cell variation but others haven't, or if you don't tick it and others have? Is it treated just like the other answers?
The reason I am asking is that a lot of the time, this box doesn't seem to be being ticked by the majority for some obvious variation so the consensus has it at 0. I guess this is also even more subjective than the other classifications as it comes down to how much variation justifies ticking the option.
Yea, I always feel like a chump clicking cell-to-cell variations, even when it is obviously the case. I just do it anyway.
"Help, I'm bored with missions!"
http://swiftandbitter.com/eve/wtd/
|
Nosum Hseebnrido
Interregnum.
6
|
Posted - 2016.03.11 18:04:58 -
[43] - Quote
http://imgur.com/9JrEWrq
So 92% think the small green dots are only in the blue area, and 38% vice versa - everyone wins but me.
http://eveboard.com/pilot/Nosum_Hseebnrido
|
Memphis Baas
1316
|
Posted - 2016.03.11 18:15:07 -
[44] - Quote
With that one, I'd have chosen the 92%, and there's cell-wall stuff going on, because in the center-left area, if you turn off the red and the blue, you can still see the shapes of the cells.
I think I figured out the 92%: switch to just green and blue; if you can still see the potato-with-holes shape with the blue turned off and just green showing, then that's the 92% choice. Otherwise, if you just see the shape of the nucleus but not the darker holes, it's the 7% choice.
Silvenin wrote:For some reason people seem to be afraid of the "unspecific" or "cell to cell variations" buttons like they bite or something. With "unspecific" you can't tell, but in the case of "cell to cell variations" I believe that you can't just pick that, you have to also pick the pattern that varies, or the 2-3 different patterns that you see. It's like:
"Which of the following patterns do you see here?" "Yes." |
HPA Illuminator
State War Academy Caldari State
17
|
Posted - 2016.03.11 18:50:41 -
[45] - Quote
Helios Anduath wrote:A quick question on the cell-to-cell variation option, how much does it play into the weighting of your accuracy change if you tick cell-to-cell variation but others haven't, or if you don't tick it and others have? Is it treated just like the other answers?
The reason I am asking is that a lot of the time, this box doesn't seem to be being ticked by the majority for some obvious variation so the consensus has it at 0. I guess this is also even more subjective than the other classifications as it comes down to how much variation justifies ticking the option.
Good question, and I don't know. I've forwarded it, so hopefully someone more knowledgeable will answer soon! |
Helios Anduath
Signal Cartel EvE-Scout Enclave
113
|
Posted - 2016.03.11 18:52:59 -
[46] - Quote
Nosum Hseebnrido wrote:http://imgur.com/9JrEWrq So 92% think that small green dots are only in blue area, and 38% vice versa - everyone wins but not me .
There can be multiple classifications for one cell, so you can have, for example, selections from both the nucleus and cytoplasm sections, just like in the tutorials. Anything that should be mutually exclusive prevents you from making conflicting choices.
With that image, the staining is more concentrated in the nucleus, and you can see "holes" in it that line up with the "holes" in the blue, so the 92% are correct.
The staining in the cytoplasm (red bit) could be background staining or could be something else - hard to say without being able to change colour channels.
Personally, I would also consider selecting cell-to-cell variation, due to the different intensities present.
In any case, it is not unidentifiable, because there is clear differentiation in staining across the cell and there are clearly identifiable features. |
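The mutual-exclusion behaviour described above can be sketched as a small rule table (the categories and rules here are purely illustrative, not the game's actual ones):

```python
# Hypothetical exclusion rules: "unidentifiable" conflicts with any
# concrete pattern, while e.g. nucleus + cytoplasm may coexist.
EXCLUSIVE_WITH = {
    "unidentifiable": {"nucleus", "cytoplasm", "mitochondria"},
}

def allowed(current_selection, candidate):
    """A candidate class can be added unless it conflicts with an
    already-selected class in either direction."""
    for chosen in current_selection:
        if candidate in EXCLUSIVE_WITH.get(chosen, set()):
            return False
        if chosen in EXCLUSIVE_WITH.get(candidate, set()):
            return False
    return True

can_add_cytoplasm = allowed({"nucleus"}, "cytoplasm")      # allowed
can_add_nucleus = allowed({"unidentifiable"}, "nucleus")   # blocked
```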
HPA Illuminator
State War Academy Caldari State
17
|
Posted - 2016.03.11 19:02:38 -
[47] - Quote
Beta Maoye wrote:I think the classification of this image is wrong. It is not Nucleus. It should be Mitochondria.
Absolutely correct. I will fwd it. |
Circumstantial Evidence
264
|
Posted - 2016.03.11 21:52:50 -
[48] - Quote
Sample #100054449
Blue - Green - G+B
I feel like a rebel... or a pioneer. Cytoplasm (80%), sure... but can everyone have missed the small spots "overlapping with holes in the blue marker"?
I had trouble tagging "nucleus" (80%) because the staining intensity seems to be at the same level both inside and outside the nucleus. I regret not tagging "nuclear membrane" - it's kind of faint; I didn't notice it until now.
|
Memphis Baas
1323
|
Posted - 2016.03.11 23:16:35 -
[49] - Quote
The sample images are specific in one way: with the green removed, they only have 3 zones:
- black (outside the cell)
- red (inside the cell, outside the nucleus)
- blue (inside the nucleus)
Graphics devs could probably code this basic pixel counting:
- blur the red and blue channel images to create fuzzy splotches of color that outline the nuclear and cell-body areas
- map the black, red, and blue zones
- overlay the green channel and calculate what percentage of the green falls within the black, red, and blue zones (look at each green pixel's neighbors)
- attach the percentages to each sample: "30% of the green in this sample is in the nucleus, 60% in the cell body, 10% outside the cell. Make your selections appropriately."
- the Project Discovery server can then eliminate or flag the obviously wrong samples: "Hey, you guys had a 100% consensus on nucleoli, but 0% of the green in this sample is in the nucleus, what gives?" A simple comparison of the pixel-count percentages vs. our choice percentages can flag wrong answers, and maybe even give a "level of accuracy" estimate for each sample.
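The per-zone count proposed above is straightforward once a zone map exists. A minimal sketch on flat lists (a real version would work on 2D channel arrays and derive the zones by blurring and thresholding, as the post suggests):

```python
def green_distribution(zone_map, green):
    """Share of the total green intensity falling in each zone.

    `zone_map` labels each pixel 'black' (outside the cell), 'red'
    (cytoplasm) or 'blue' (nucleus); `green` holds the green intensity
    of the same pixels.
    """
    totals = {"black": 0, "red": 0, "blue": 0}
    for zone, g in zip(zone_map, green):
        totals[zone] += g
    grand = sum(totals.values()) or 1  # avoid division by zero
    return {zone: t / grand for zone, t in totals.items()}

dist = green_distribution(
    ["blue", "blue", "red", "black"],
    [90, 90, 20, 0],
)
# dist["blue"] is 0.9: 90% of the green signal sits in the nucleus,
# so a consensus of "cytoplasm only" on this sample could be flagged.
```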
|
Elyia Suze Nagala
Republic Military School Minmatar Republic
87
|
Posted - 2016.03.11 23:48:30 -
[50] - Quote
Nevyn Auscent wrote:http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.
That's the Jove nebula 0.216 secs after supernova from a distance of 15.764 LYs with optical enhancements. |
|
Nankeen Heron
Jim's Mowing
20
|
Posted - 2016.03.12 00:31:01 -
[51] - Quote
Nevyn Auscent wrote:http://i.imgur.com/tkkrx53.png Try and work that one out for a laugh.
Seeker spore.
For those like me that are struggling to tell the difference between some of the structures, there are more hi-res examples here: http://www.proteinatlas.org/learn/dictionary/cell |
Van Dracon
Tesla Aerospace Industries
3
|
Posted - 2016.03.12 02:07:58 -
[52] - Quote
HPA Dichroic wrote:HPA Illuminator wrote:Van Dracon wrote: I really enjoyed reading your reply. Didn't realize how many different cameras there are and how much light can or can't get through. I understand about the slice, i guess that is very accurate in some way. Well speaking on behalf of me and probably many others i think it's going to take some time to get use to. I don't come from a scientist background. Worked in I.T. for some years and was just looking at it from an application point of view.
I spent many years in software automation. Would be nice to have recognition filters so instead of you selecting what images come close to the examined photo you could possibly setup filter options for it instead. E.g. if you see more than 5 green dots in the blue area have that selected instead of you selecting it on the right hand side of the pane. I guess since the photos are static we can't setup such filters cause a photo is a photo.
Because you work with colors is there a way we can calculate how much red/ green, blue light is in the photo. Why am i asking this ?. Again having filters setup based on light ratios based on %. So if there 60% green light in the photo which is it likely to be or be as close the 3 category on the right side of the pane.
Again why am i talking about colors, there are so many results not sure for e.g. 5 or less green dots in blue area have same green light frequency light as the others that full in the same basket. Or is most of this color thing I am talking about irrelevant.
Just my thoughts.
Once again thanks for sharing
I'll actually ask my colleague HPA_Dichroic to give some input on this, as he's working with image analysis and could comment so much better. Thanks for discussing & coming with ideas, love the interest from everyone! Hey Van Dracon, I'm not 100% sure what you're talking about here with the recognition filters. Do you mean have some sort of semi-automated classification? We could certainly provide a suggested classification based on some basic image features, but where would the fun in that be I am working on implementing a fully automated system for this task with my student that will use neural networks to recognize these sub-cellular patterns. Ideally this system will answer most of the images and give a confidence score so that in the future we only have to review the least confident classifications. The system is supervised though so needs lots of quality training data which is where you come in. Maybe you can explain more about your idea and I'll try to understand it better
Hi Dichroic,
Thanks for the feedback, interesting stuff :). Yeah, with the recognition filters what I was thinking of was the software measuring areas where, e.g., you have a straight line, or circles that can be measured. Measuring patterns, or pattern recognition. This would also calculate the light or colors; so, e.g., we have a circle that is recognized and we also measure its light. That's why I asked earlier whether specific patterns have a set ratio based on the color/light scheme they fall in.
I'm probably going way off track with the game here. As you know, military jets scan on heat, so when they fire a rocket it goes towards the heated area. That's just an example; I don't want to come up with something that makes the game boring, just trying to help. Yes, it will eventually come down to the user filtering things out. The computers can only give us the scanned patterns, or the best matches, and then we decide.
So my dream feature in this game (probably asking too much) was going to be: can we have a filter where the CPU does most of the calculations, to around say 80%, and then the remaining 20% is left to the human to make a judgement on? What we are given so far is color tools. We don't have tools to measure or work with patterns, even though those color tools do show some patterns in a way. Yet some basic photos I've worked with have been wrong because of patterns, not colors. From what I see, you think you've got it spot on because the colors match the supporting example photo in-game.
So far we have talked about patterns and light. Have we talked about sound vibrations :)? Does sound affect these cells differently? That's another form of pattern recognition, but only if they all react differently to it, e.g. circles reacting differently from straight lines when sound or vibration is applied. Anyway, too much chit-chat from me.
The best references for light measurements and temperatures would be the military, and let's not forget NASA. NASA, as you know, uses specific measuring tools to measure Pluto's surface, and also X-rays the surface. Just a thought. Anyway, thank you for the info. We'll see how things progress; so far it's fun and new.
Many thanks. |
HPA Illuminator
State War Academy Caldari State
23
|
Posted - 2016.03.12 05:55:16 -
[53] - Quote
Circumstantial Evidence wrote:Sample #100054449 Blue - Green - G+BI feel like a rebel... or a pioneer. Cytoplasm (80%), sure... but can everyone miss small spots "overlapping with holes in the blue marker"? I had trouble tagging with "nucleus" (80%) because staining intensity seems at the same level both in and outside the nucleus. I regret not tagging "nuclear membrane" - its kind of faint, didn't notice until now.
Agree that the fibrillar centers are clearly visible (not sure why so many would go for nucleoplasm here?).
I'm not sure whether it's a nuclear staining or just the nuclear membrane... but I'm leaning towards both being correct.
On a side note: I'm on my phone, so I just opened the first picture and was zooming in and looking really closely at the screen, thinking... what green staining is he talking about? Is there such a difference between looking at a cell-phone screen and a computer one? Hehe, my excuse being it's 6.50 am and I just woke up. |
Lulu Lunette
ThinkTank Phoenix TOG - The Older Gamers Alliance
311
|
Posted - 2016.03.12 08:11:21 -
[54] - Quote
I started off really badly. Even though I was trying, button mashing would probably have gotten better results!!
But I clawed my way up from 40% accuracy at level 12 and am back at 50% at level 20. Feels good. I want that armor lol
@lunettelulu7
|
Beta Maoye
105
|
Posted - 2016.03.12 21:05:22 -
[55] - Quote
In this classification result, I realized I was wrong about Cytoplasm for so many cases. |
Galaxxis
Caldari Provisions Caldari State
4
|
Posted - 2016.03.12 21:08:33 -
[56] - Quote
I need to get an image hosting thing, I've had some really interesting ones. One looked like a nucleus popped and all the goo came out! It was glowing bright green and there was a black spot where the nucleus should have been. |
|
HPA Illuminator
State War Academy Caldari State
25
|
Posted - 2016.03.12 21:33:39 -
[57] - Quote
Beta Maoye wrote:In this classification result, I realized I was wrong about Cytoplasm for so many cases.
Not sure what you mean? I've added it to our list of possible errors and will double-check whether it should have cell-to-cell variation as a class. Will get back to you! |
|
HPA Illuminator
State War Academy Caldari State
25
|
Posted - 2016.03.12 21:34:04 -
[58] - Quote
Galaxxis wrote:I need to get an image hosting thing, I've had some really interesting ones. One looked like a nucleus popped and all the goo came out! It was glowing bright green and there was a black spot where the nucleus should have been.
Would love to see that one! :) |
Gilbaron
Free-Space-Ranger Northern Coalition.
1901
|
Posted - 2016.03.12 21:44:59 -
[59] - Quote
https://i.gyazo.com/54efcad7e2f4666f6ad66988a2e17ceb.png |
Nick Kanjus
Lone Star Warriors Yulai Federation
9
|
Posted - 2016.03.12 21:55:25 -
[60] - Quote
I'm a bit confused about this one. I might be totally wrong, but the description of cytoplasm says:
Seen throughout the whole cell, except in the nucleus (blue marker). The intensity can vary throughout the cell, and is often stronger close to the nucleus.
In other words: all the red might be green, but the blue is blue without a speck of green in it.
Now I got image 100096917 (as in the screenshot here: http://i.imgur.com/cVv5kwc.png ). Basically it's green all over, so I figured the sample was useless. But to my surprise the 50% match was cytoplasm. Am I misunderstanding cytoplasm and rejecting my samples too soon? Or did 50% of the people not read the first line of the description? |