Sina VR 2018-10-09 20:01:06
In the previous article, "The Most Hardcore Experience and Technical Analysis: An In-Depth Comparison of Magic Leap One and HoloLens," Karl Guttag, CTO of RAVN, an AR hardware/software company, compared the real-world views seen through the Magic Leap One (ML1) and the HoloLens. In today's post, he looks at the ML1's displayed imagery. Here are the details:
Above left is a crop of the original test image scaled to 200%, alongside the same portion of the image as captured through the ML1. Test charts containing a variety of features are a difficult but fair way to judge different aspects of image quality. One-pixel and two-pixel wide features are used to test the resolution of the display. Please refer to the end of the article for additional information on how the images were taken.
Most Magic Leap demo content consists of brightly colored but small objects that serve both as "visual candy" and to hide the lack of color uniformity across the field of view. Faces with skin tones were included in the test chart because humans are especially sensitive to errors in skin color. The test chart also provides a large solid white object to help reveal any color shifts.
I used the Helio browser to display the images, and some of the resolution issues may be related to the way Helio scales the image in 3D space. I tried loading the test images into the ML1's gallery instead, but the results were no better. I viewed the same test images in the HoloLens browser, which is significantly sharper than the ML1. At some point it would be worth separating browser scaling issues from optical issues, but either way, this is how the ML1 normally displays 2D images.
I examined the output of two different ML1 units, and neither produced sharp images, so I believe they fairly represent the ML1's image quality. Even if the ML1's scaling engine is poor, the level of glow and chromatic aberration caused by the optics suggests that the ML1's optics are relatively low resolution.
I only tested the "far-focus" mode (used for content beyond roughly 36 inches) because it was very difficult to test the near focal plane. The near focal plane seemed sharper to me than the far focal plane, and a figure in a Magic Leap patent application suggests the same (see below). The fact that light for the far focal plane must pass through the near focal plane's exit gratings on its way to the eye may be part of the problem. I would have liked to test the near focal plane as well, but there was no way to zoom in on the test charts, and I don't know how to keep the headset locked in near-focus "mode."
1. ML1 image issues
The image below was taken through the ML1's right eye optic. Honestly, close-up images can show problems you wouldn't normally notice.
Typically, a projected image does not look as good as a direct-view display due to defects in the optics, but in the case of the ML1, the diffractive waveguides themselves seem to limit the resolution.
While the human eye and a camera differ in how they "see" an image, a photograph is still a good indication of what the eye sees. Cameras are "objective/absolute," whereas the human visual system is more subjective and adaptive, and more sensitive to things like brightness and color variation; the eye can also pick out artifacts and other issues directly when wearing the device.
Overall, the color balance in the center of the image is good. You'll notice a color shift in the two facial skin tones in the test image, but it's not very noticeable until roughly the outer 15% of the image.
Problems with ML1 images:
Soft/blurry images: you can see this in the text and in the one- and two-pixel-wide test patterns. While some of this softness may be due to the scaling algorithm, the images are quite blurry overall. Despite the ML1 imager's claimed resolution of 1280×960, the effective resolution is only about half that in each direction, or closer to 640×480 in the center of the field of view, and lower still in the periphery.
Waveguide glow (out-of-focus reflections): although the glow is most noticeable around large, bright objects (such as the circles and squares in the test image), it also reduces contrast, which lowers the effective resolution of fine detail such as text.
Color waves in the field of view move with head and eye movement (see the black-and-white rectangle pattern). Color uniformity across the field of view is relatively poor, consistent with every diffractive waveguide I have seen so far.
The colors shift toward blue-green and blue on the left and right sides of the image; again, a common problem with diffractive waveguides. In the right eye, red is lacking on the left side, and green and red are lacking (leaving blue) on the right side; the opposite holds for the left eye.
Brightness decreases as you move away from the center of the field of view, a problem common to most projection-based displays. The ML1 seems to be better than the HoloLens in this regard.
Chromatic aberration: notice the edges of the circle, with a red fringe on one side and a blue-green fringe on the other.
Binocular overlap: this is a common problem for stereoscopic headsets with small fields of view. When the image fills the field of view, each eye sees roughly the same image, but shifted. Viewing with both eyes, the left eye's image appears truncated on the right side and vice versa, so you see a dark band on each side. For an image appearing about 4 feet away, I've marked these regions with orange dashed lines. Fixing this would further shrink the usable field of view, because a significant percentage of it would have to be kept inside the overlap region.
Cropping the field of view to support interpupillary distance (IPD) adjustment: based on how the full-size test image was displayed, I estimate they reserve about 130 horizontal pixels (about 10% of the 1280) to support electronic IPD adjustment (note that this is a very indirect measurement and may be inaccurate, as I have no control over the source image). The HoloLens appears to reserve a similar number of pixels.
The diffractive waveguides trap light from the real world, which causes colors from the real world to appear "flared": I covered this in a previous post.
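Two of the numeric estimates in the list above are easy to sanity-check with simple arithmetic. The following sketch only restates the article's own estimates (the 2× resolution loss and the ~130 reserved pixels are indirect measurements, not published specs):

```python
# Rough arithmetic behind the estimates above.
panel_w, panel_h = 1280, 960      # claimed ML1 imager resolution

# Effective resolution is estimated at about half in each direction:
eff_w, eff_h = panel_w // 2, panel_h // 2
print(f"effective resolution ~ {eff_w}x{eff_h}")   # ~ 640x480

# Pixels reserved for electronic IPD adjustment (indirect estimate):
ipd_reserved = 130
print(f"IPD reservation ~ {ipd_reserved / panel_w:.0%} of width")  # ~ 10%
```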
2. More observations about resolution
To show more detail, the camera was moved in to less than half the distance when taking the image on the left, providing more than 5 camera samples per ML1 pixel. The iPhone portion of the image below was copied and shifted to make it easier to compare with the ML1 text. The iPhone capture shows what the text should look like if the ML1 reproduced the image faithfully; the ML1 text, by contrast, is uniformly soft and blurry. The ML1 image is not as sharp as the HoloLens, nor even as sharp as Lumus' waveguide. One-pixel dots and 45-degree lines are very hard to make out.
3. Summary
Based on past experience with other diffractive waveguides, I expected the device to suffer from color inconsistency and glow. The colors in the center of the ML1's field of view, however, are quite good.
But I couldn't get past the soft, blurry text. I first noticed it in the text of the Dr. Grordbort's Invaders demo, and that's exactly what prompted me to test the ML1. I'm not sure yet how much of this is due to the dual focal planes, but I believe it's what makes the ML1 blurrier than the HoloLens.
In the future, I'd like to bypass the 3D scaling and drive the display directly, to better isolate the optics from the scaling issues. I'm also very curious whether I can lock the device in "near focal plane mode" and test that mode independently. When testing the ML1, it switches back to the far focal plane the moment I look away from near content, which is why I didn't test the near focal plane.
4. Settings for taking test chart pictures
I used an Olympus OM-D E-M10 Mark III mirrorless camera, chosen specifically for its size and features. The distance from the center of the lens to the bottom of the camera body is small enough that the camera fits inside the rigid headset with the lens positioned where the pupil of the eye would be. In portrait orientation it captures 3456 pixels wide by 4608 pixels high, which gives more than two camera samples per pixel of the ML1's 1280×960 display. The camera has 5-axis image stabilization, which is very helpful for handheld shots.
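The "more than two camera samples per pixel" claim can be checked with a quick calculation. This sketch assumes, for illustration, that the 1280-pixel-wide virtual image roughly fills the 3456-pixel portrait frame; in practice the framing varies, but the oversampling conclusion holds:

```python
# Camera samples per ML1 pixel, assuming the 1280x960 virtual image
# roughly fills the camera's portrait-mode frame (an approximation).
cam_w, cam_h = 3456, 4608   # E-M10 Mark III, portrait orientation
ml1_w, ml1_h = 1280, 960    # claimed ML1 imager resolution

print(cam_w / ml1_w)  # 2.7 horizontal camera samples per ML1 pixel
print(cam_h / ml1_h)  # 4.8 vertical camera samples per ML1 pixel
```

With more than two samples per display pixel in each direction, the camera comfortably exceeds the Nyquist limit, so the camera itself is not the resolution bottleneck in these photos.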
The ML1's "far focus point" is about 5 feet away (about 1.5 meters). I placed a test image on a web page and opened it with the ML1's Helio browser. I then moved the ML1 headset back and forth until the test image filled the view, with the virtual image appearing about 4 feet away.
The image above shows the setup with the iPhone, viewed from an angle, to give a sense of where the virtual image sits relative to the phone. This photo was taken through the ML1, with the red annotations added afterward.
I know from other experiments that the ML1's "far focus point" is about 5 feet away. I placed the iPhone 6s in the position shown in the side view. To get both the phone and the virtual image in focus at the same time, I positioned the phone just behind the virtual image so the camera could focus on the phone and the ML1 image together. I then scaled the image on the iPhone's display so that its text appeared the same size as the ML1's text. This shows what the text should look like at the test image's resolution, and it verified that the camera could resolve individual pixels in the test image.
The iPhone's brightness was set to 450 cd/m² (its full daylight brightness) so that it would still be visible after the ML1's roughly 85% light reduction, leaving a net of only about 70 cd/m². I shot in RAW and then white-balanced against the whites in the center of the ML1 image, which makes the iPhone's display look slightly green. The photo was taken at 1/25th of a second to average out any field-sequential color effects.
For reference, the image below is a capture taken with the ML1's own camera from approximately the same position. For such a capture, the exposure of the ML1's camera is set independently of the test image. Here the ML1's camera seems to be focusing on the distant scene, which throws the iPhone out of focus, but it gives a sense of the iPhone's brightness.
Interestingly, this capture shows some different scaling artifacts; in particular, thin black lines on a white background tend to disappear.
The ML1 camera's capture favors white over black. Note the one-pixel-wide features under "Arial 16 point": the black one-pixel dots and lines are almost entirely lost, and even the two-pixel-wide features on the left are nearly gone.
Source: Envision