360 Camera Techniques

TECHNIQUE: How to capture a 3D 360 photo with a 2D 360 camera

Yes, it is possible to use a 2D 360 camera to capture a 3D 360 photo, although it takes a little work.  Here’s how.

The basic idea is that for you to see a photo in 3D, the left eye has to see the left eye view of the scene, while the right eye has to see the right eye view.  The standard format for 3D 360 photos is the over-under (a.k.a. top-bottom) format: the equirectangular left eye 360 photo is on top, while the equirectangular right eye 360 photo is on the bottom.  Since each equirectangular image has a 2:1 aspect ratio, the resulting photo has a 1:1 aspect ratio.
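To make the format concrete, here is a minimal sketch in NumPy (the image sizes and variable names are placeholders of my choosing, not part of any standard): stacking the two 2:1 equirectangular views vertically produces the 1:1 over-under frame.

```python
import numpy as np

# Placeholder equirectangular images, each with the standard 2:1 aspect
# ratio (height 1024, width 2048). In practice these would be the left-eye
# and right-eye 360 photos loaded from the camera.
left_eye = np.zeros((1024, 2048, 3), dtype=np.uint8)
right_eye = np.zeros((1024, 2048, 3), dtype=np.uint8)

# Over-under (top-bottom) format: left eye on top, right eye on the bottom.
over_under = np.vstack([left_eye, right_eye])

print(over_under.shape)  # (2048, 2048, 3): a 1:1 aspect ratio
```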

Taking a 3D photo with a non-360 camera is relatively simple to do.  You take a photo from the left eye position, and move a few inches for another photo in the right eye position, or you have two cameras on a bracket.

With a 360 camera, it’s a little trickier.  Suppose you take a 360 photo from the left eye position, then move a few inches for a photo in the right eye position (or you put two 360 cameras on a bracket).  Looking at the front view of the photo, the first camera becomes the left eye and the second camera becomes the right eye.  So far so good.

However, if you turn around to the rear view, the first camera becomes the right eye view, not the left eye view, and the second camera becomes the left eye view, not the right eye view.  It’s for this reason that some have claimed it’s impossible to use a 2D 360 camera to capture a 3D 360 photo.

Actually, it is not so complicated to use a 2D 360 camera to capture a 3D photo.  As we saw, when placing two 360 cameras side by side (or using one 360 camera sequentially) with a bracket or other tool, it is not possible for a single 360 photo to consistently represent the left eye view or the right eye view.  It will be the left eye from one direction, and the right eye from the other direction.  The solution is simple: switch the respective rear views of the two cameras (or shots).

Here is a sample 3D 360 photo from a Ricoh Theta (to view in 3D, you need to use a smartphone and switch to cardboard view):

As you can see, there are several limitations.  First, the 3D effect is present only when looking forward or to the rear; as you look to the side, the 3D depth diminishes.  Second, if the shots are not perfectly aligned, viewing the result can be stressful for the eyes.  Third, if you want to take photos of anything that is moving, you’ll need two cameras and you’ll need to trigger both simultaneously.  Fourth, you can see the stitch is very rough.  This is because the left view and the right view are at different distances from the subject when facing 90 and 270 degrees.

In any case, if you want to do this, here are the step-by-step instructions:
Step 1.  You need two 2D 360 cameras, or you need to take two separate shots.  Let’s call them Shot A and Shot B.  The cameras (or shots) need to be around 65mm (2.5 inches) apart, the typical distance between our pupils.  Edit: I forgot to mention that I used a cheap dual flash bracket, similar to this one.

Step 2.  Eventually, we will put Shot A on top of Shot B, so that the resulting photo is 1:1.  But before we do that, we need to swap the rear view of Shot B into the rear view of Shot A.  With the standard equirectangular format, it’s not as simple as the front view being on the left side and the rear view being on the right side.  Rather, the front view is in the middle, flanked by half of the rear view on its left side and the other half of the rear view on its right side.

Rather than swapping the rear views, I found it simpler to swap the front views instead (the middle 50%) and to switch the positions of Shot A and Shot B.  The net effect is the same as swapping the rear views, except that there’s only one part of the photo to swap instead of two.

The original pattern is this:
Shot A, which consists of: Rear right A – front A – rear left A.
Shot B, which consists of: Rear right B – front B – rear left B.

The target pattern is this:
Shot A will be changed to: Rear right B – front A – rear left B.
Shot B will be changed to: Rear right A – front B – rear left A.

As a shortcut, I swap front B and front A:
Shot A becomes: Rear right A – front B – rear left A.
Shot B becomes: Rear right B – front A – rear left B.

Step 3.  Then, instead of putting Shot A on top, I put it on the bottom, and vice-versa:
Edited Shot B on top: Rear right B – front A – rear left B.
Edited Shot A on the bottom: Rear right A – front B – rear left A.

As you can see this is now the same as the target pattern above.
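The edit described in Steps 2 and 3 can be sketched in a few lines of NumPy.  This is a minimal illustration, assuming both shots are same-size equirectangular arrays with the front view occupying the middle 50% of the width; the function name is mine, not a standard API.

```python
import numpy as np

def make_3d_360(shot_a: np.ndarray, shot_b: np.ndarray) -> np.ndarray:
    """Combine two equirectangular 2D 360 shots into an over-under 3D 360 photo.

    shot_a is taken from the left eye position, shot_b from the right eye
    position. Each is an H x W x 3 array with a 2:1 aspect ratio; the front
    view occupies the middle 50% of the width, flanked by the two halves of
    the rear view.
    """
    h, w, _ = shot_a.shape
    assert shot_b.shape == shot_a.shape and w == 2 * h

    # Columns of the front view: the middle 50% of the equirectangular frame.
    lo, hi = w // 4, 3 * w // 4

    # Step 2 shortcut: swap the front views of the two shots (simpler than
    # swapping the two rear halves separately).
    a, b = shot_a.copy(), shot_b.copy()
    a[:, lo:hi] = shot_b[:, lo:hi]
    b[:, lo:hi] = shot_a[:, lo:hi]

    # Step 3: stack edited Shot B on top and edited Shot A on the bottom,
    # giving the standard 1:1 over-under format (top = left eye view).
    return np.vstack([b, a])
```

Note that edited Shot B goes on top: in the over-under format the top image is the left eye view, and after the swap it is edited Shot B that holds front A and the rear halves of B.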

One challenge is sharing the 3D 360 photo.  While there are now dozens of 360 photo and video sharing sites, a few of which support 3D 360 videos, I don’t know of any that support 3D 360 photos.  However, it is possible to use Google’s VR View to embed a 3D 360 photo on a webpage, as I did above.  See here for details (set the is_stereo parameter to true).  Another solution is to put your 3D 360 photo in a 3D 360 video and then upload it to a platform that supports 3D 360 videos, such as YouTube or LittlStar.

As you saw, taking a 3D 360 photo with a 2D 360 camera is possible but unwieldy.  Moreover, to take photos of anything that moves, you’ll need two cameras, which drives up the cost.  Finally, we saw that processing the photo will add more steps to your workflow.

It’s for these reasons that I hadn’t used this method to take 3D 360 photos even when I knew how to do it.  Back in March last year, I thought about making an app, but I figured an actual 3D 360 camera would be more practical, and indeed, they’re here.

The Vuze camera ($799; hands-on here) is being released in March, and the TwoEyes VR is planned for release in August.  Not only are they more convenient, but they can take better quality 3D 360 photos, in most cases at a cost similar to or less than that of a pair of 360 cameras (with a few exceptions).  Moreover, they can also take 3D 360 videos, which used to be possible only with multi-thousand-dollar rigs.  Now even consumers will be able to capture their own 3D 360 videos.

3D VR photos with the Kodak SP360
Realtime 3D 360 videos with StereoStitch
In-depth look at the Vuze Camera, the first affordable 3D 360 camera
TwoEyes VR is the most affordable 3D 360 camera

About the author

Mic Ty



    • Hi In Film. Thanks for commenting. I have never heard of Katsuhiko Inoue. I discovered this process myself but of course this does not mean I'm the first to do it, and I didn't claim that.

      Best regards,

    • Hi In Film. I took these photos and made these experiments in March of 2016. After I made those experiments, I discussed making an app with @360Shane on Instagram. I even reserved a domain name for the app. But I didn't push through with it. I have no idea about Mr. Inoue or when he posted his method.

    • For what it's worth, we also independently developed a method that is very similar to this in early 2016. A lot of people started looking into it; it was when VR was very popular and nobody had done and documented anything yet. The Foundry had a full suite of tools to edit video shot like this by mid 2016, which definitely didn't come from Inoue's method.

    • The tools from the Foundry are not at all intended for camera geometries like that; sorry, that is just wrong. This dual Theta is essentially a stereo-pair rig.

      The Foundry's tools are more specifically geared toward panoramic ring rigs like those I built in 2015 (not that I was the first). The Iliad-8 and Iliad-12 on my blog are the types that CaraVR shines with.

  • Hi In Film, we developed this same method 20 years ago at the company that became iPIX. Of course, we were using SGI Onyx machines to drive the huge headsets. It's a natural progression when exploring stereographic viewing. Nice to see that the technology is becoming economically feasible today.

    • You did two stereo pairs at only front and back? Such a limiting design; why waste such impressive resources on such a halfway method? You just ignored stereoscopy for the side views?

      I suspect it wasn't just two stereo pairs pointed in opposite directions; it would be so easy to make it so much better, especially with the resources you had back then, but if you say so. I can see doing it with Thetas as a quick and easy experiment, but if building something for million-dollar SGI-powered HMDs, I'd think you'd add a few more cameras.

    • Also, there were no Thetas back then, so it wasn't the "same method". The prior art I am referring to, from Mr. Inoue, uses the exact same cameras; it is the exact same method.

    • Hi In Film. I swear I had not even heard of Mr. Inoue until you mentioned him in your comment. Sure, I am a member of several FB groups, but I don't always read them, and as I said, I had never heard anything about Mr. Inoue or his method. In any case, as Panoboss said, it's a fairly obvious natural progression. So let's all relax.

    • I don't care about that; I was just speaking to Panoboss's misstatements. Nobody cares about dual Thetas; I pointed out the prior developments and that's good enough for me 🙂 You are all good, thank you for sharing.