Photos

Create a 360-degree VR panorama of the night sky


Creating 360 VR panoramas (also known as 360×180 degree panoramas) is a fun side challenge for photographers, and over the past few years it has become simple enough that phones like the Google Pixel line make VR panorama photography a true point-and-shoot affair. But capturing a panorama of the night sky remains a worthy challenge.

On Google Pixel phones, one simply selects the “Photo Sphere” option, presses the button to start taking pictures, and then points the camera at designated spots on the screen. Once all the specified points are covered, the phone automatically stitches the images into a 2:1 equirectangular VR panorama. The result is viewable in Google Photos as an interactive photo where you can pan in any direction (including up and down) and zoom in on details.

While these quick Photo Sphere panoramas are surprisingly good, there are a few limitations. First, stitching may not be accurate if objects are close to the camera. If you have taken panoramas before (not necessarily full 360 degrees), you will know that parallax errors occur when the camera does not rotate around the nodal point of the lens. This is especially hard to achieve with a mobile phone, since you have to watch the screen behind the lens while rotating the phone.

The second problem is that at night, exposures need to be many seconds long (at least), during which time the earth rotates, making the stars appear to move relative to the horizon. While Google’s Pixel cameras have an Astrophotography mode that captures impressive single still images (at a rate of several minutes per shot), the mode does not apply to Photo Sphere panoramas.

Night panorama challenge

Even for a modern DSLR or mirrorless camera, a 360 VR panorama at night is a challenge. While it is possible to shoot a daytime panorama handheld with care, the long exposures required for a night sky panorama demand a tripod. And as with any 360 VR panorama, if foreground subjects are part of the scene (certainly if the ground is included), a special panoramic head is required.

The second big problem to deal with is the movement of the sky. This makes it necessary to capture all the frames that make up the panorama in the shortest time possible. Within individual frames, the exposure must be short enough to keep the stars from trailing (the longer the focal length, the shorter the exposure). Between frames, there should be minimal delay, because the motion of the stars will cause registration problems wherever the ground is also visible in the frame.
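As a rough illustration of the exposure/focal-length trade-off, many astrophotographers use the so-called “500 rule” — a rule of thumb, not something from this article: divide 500 by the effective focal length to estimate the longest exposure before stars visibly trail. A minimal sketch:

```python
def max_exposure_seconds(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Approximate longest exposure (seconds) before stars noticeably trail,
    per the common '500 rule' of thumb."""
    return 500.0 / (focal_length_mm * crop_factor)

# A 15 mm lens on a full-frame body allows roughly 33 seconds --
# comfortably longer than a typical 15-second night-sky exposure.
print(round(max_exposure_seconds(15), 1))
```

Note how a longer focal length (or a crop sensor) shrinks the allowable exposure, which is exactly why ultra-wide lenses help here.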

Lens selection

With the challenges mentioned above, the choice of lens (and camera) becomes important. You can find many night-sky mosaics online taken at medium to long focal lengths, and many image-processing programs can stitch them without much difficulty, but a full sky-to-ground 360 panorama at those focal lengths would require far too many frames to capture before the sky moves.

Using an ultra-wide or fast fisheye lens reduces the number of frames, and therefore the time, needed to capture all of the shots that make up the panorama, but such lenses often have severe aberrations at the edges. Fortunately, edge problems can be avoided by shooting with enough overlap that only the central parts of each frame are used.

Camera selection

It is clear that the camera is an important part of an effective nighttime 360 VR solution. Good noise performance at high ISO is important, and full-frame sensors are preferable to APS-C or smaller because they minimize the number of frames needed to cover the sky.
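To see why sensor size matters for frame count, compare the diagonal field of view of the same focal length on full-frame and APS-C sensors. This sketch uses the standard rectilinear formula; a fisheye projection covers even more, so treat the numbers as illustrative only, and the sensor dimensions are typical values rather than figures from this article:

```python
import math

def diagonal_fov_deg(focal_mm: float, sensor_w_mm: float, sensor_h_mm: float) -> float:
    """Diagonal field of view of a rectilinear lens: 2 * atan(d / 2f)."""
    d = math.hypot(sensor_w_mm, sensor_h_mm)
    return math.degrees(2.0 * math.atan(d / (2.0 * focal_mm)))

# 15 mm on full-frame (36x24 mm) vs. a typical APS-C sensor (23.5x15.6 mm):
# full-frame covers a noticeably wider diagonal, so fewer frames are needed.
print(diagonal_fov_deg(15, 36.0, 24.0))
print(diagonal_fov_deg(15, 23.5, 15.6))
```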

My solution

My personal solution to the 360 VR panorama problem is to use a full-frame Nikon D850, Nikon D600, or (modified) Canon RP. In practice, I prefer the D850 because of its better noise performance and higher pixel count.

For a lens, I use a Sigma 15mm f/2.8 fisheye. The lens is relatively inexpensive, and its manual focus and aperture rings allow it to be used with an adapter on the Canon RP in addition to the Nikon cameras I normally use. At f/2.8 the lens shows noticeable aberrations at the edges, but proper overlap and stitching can correct many of the problems.

For the tripod, I use a variety of Bogen/Manfrotto aluminum tripods that are sturdy enough for long exposures. On top, I usually use a video or camera head with panning capability; any type of head that rotates horizontally around the center screw of the camera mount will do (i.e. not a ball head).

Normally I would just go with a commercially available 360 VR rotating head, but during the initial testing phase I threw together a vertical pivot from scrap aluminum. It worked so well that I never needed to replace it. This rig holds the camera in portrait orientation with the lens pivot point at the center of both the vertical and horizontal rotation axes. It also has enough adjustment to accommodate almost any camera body I own.

For quick setup and teardown of this otherwise cumbersome assembly, I use Manfrotto quick-release clamps on the VR head, tripod, and camera.


Shooting Process

When setting up, I first make sure that the top of the tripod is level so that the swivel head actually rotates in a truly horizontal plane. This ensures that post-processing requires as little manual intervention as possible.

Next, I make sure the 360 VR rotator is adjusted so that the axes of camera movement pass through the nodal point of the lens (near the front of the lens). All camera settings are in manual mode so that focus, f-ratio, and exposure time remain constant across all frames. In general, I use an exposure of about 15 seconds at f/2.8, but in some cases I switch to f/4 or even f/5.6 for smaller edge aberrations. Stopping down also limits the number of faint stars recorded, making it easier to pick out constellations in the final sky panorama.

My predefined shooting pattern is 6 frames with the camera pointed about 60 degrees above the horizon, with the first frame about 30 degrees clockwise from the celestial pole and each subsequent frame rotated 60 degrees clockwise from the previous one. This places the overlap between the last frame and the first near the polar region. The first and last images are farthest apart in time, so they show the worst-case star misalignment, but because those frames overlap closest to the pole, where the stars move least, the misalignment error is minimized.
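The six-frame pattern above can be sketched as a simple azimuth schedule, assuming for illustration that azimuth 0 points toward the celestial pole:

```python
def frame_azimuths(num_frames: int = 6, offset_deg: float = 30.0) -> list:
    """Azimuths (degrees clockwise from the pole) for the sky frames:
    first frame 30 degrees from the pole, then equal 60-degree steps."""
    step = 360.0 / num_frames
    return [(offset_deg + i * step) % 360.0 for i in range(num_frames)]

print(frame_azimuths())  # [30.0, 90.0, 150.0, 210.0, 270.0, 330.0]
```

The first frame (30°) and last frame (330°) straddle the pole at 0°, which is why their overlap region sees the least stellar motion.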

Next, I adjust the tilt so that the camera points straight up and, for good measure, take two photos, with the second rotated 90 degrees from the first zenith shot. Ideally, zenith shots should not include any landscape elements. They provide distortion-free coverage of the zenith as well as junctions to the sky in the lower-elevation frames. If any landscape elements are captured in the zenith shots, they can be masked out in post-processing.

For individual frames, precise pointing is not necessary. Before starting a sequence, I note landmarks on which to center each 60-degree-rotated shot and use these points to rotate the rig, taking as little time as possible between frames. To reduce the possibility of shake, I use the camera’s 2-second shutter delay when I am not using a remote shutter release.

If the ground is to be included in the panorama to fill the “hole” below the horizon, a second set of images is taken with the camera pointed below the horizon. These frames must not include stars, or masks must be used to exclude the area above the horizon when compositing the panorama. If foreground subjects are important to your panorama, you may also want to refocus before capturing the ground frames.

In the panorama above, all of the observatories atop Mauna Kea on the island of Hawaii are visible. The sky is blue because the Moon has risen; it sits behind the nearest observatory building. The orange line is a laser guide star beam used to correct for atmospheric turbulence. A “glitch” is visible in the laser because the telescope and laser were tracking their moving target while the panoramic frames were being shot.


Post processing

To process the 360 VR panorama, I start by importing the set of frames into Lightroom. Adjustments such as color balance and exposure can be made here, as long as the same adjustments are applied to every frame. Airplane trails can also be cloned out at this stage. Lens corrections (distortion and vignetting) should not be applied, as my recommended panorama stitching program (PTGUI) accounts for lens distortion itself.

After the frames have been adjusted, they can be processed in PTGUI. There, unwanted parts of individual frames (e.g. landscape in the zenith shots) can be masked out. If PTGUI has trouble finding matching points in the sky automatically, control points can be added manually. Familiarity with the constellations helps a lot if this step is necessary.

Once the panorama has been stitched, its center point can be set in PTGUI, and the horizon line can be straightened if necessary. For the final output, PTGUI can generate a web page, complete with the code needed for interactive viewing in a browser. PTGUI’s local viewer offers a similar experience on your PC.

One of the easiest ways to share the interactive experience is to upload your panorama to Google Photos. To do this, export a 2:1 equirectangular 360×180 JPEG from PTGUI, resize it to at most 8,000×4,000 pixels, and upload the file to Google Photos. Then use Google Photos to create a public link to share.
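The resize step is just a proportional downscale to the 8,000-pixel cap. A small sketch of the arithmetic (the 14,144-pixel input width is a hypothetical stitched size, not a figure from this article):

```python
def google_photos_size(width: int, height: int, max_width: int = 8000) -> tuple:
    """Target dimensions for upload, capped at a maximum width of 8000 px
    (4000 px tall for a 2:1 equirectangular panorama)."""
    if width <= max_width:
        return (width, height)
    scale = max_width / width
    return (max_width, round(height * scale))

print(google_photos_size(14144, 7072))  # (8000, 4000)
```

Any image editor (or PTGUI itself) can then perform the actual resize to these dimensions before upload.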

Conclusion

Take the challenge and try your hand at a panoramic photo of the night sky. A vivid, interactive sky panorama lets you enjoy the night sky even on a cloudy day, and helps you learn your way around the stars.




