
A closed form unwrapping method for a spherical omnidirectional view sensor

Abstract

This article proposes a novel method of image unwrapping for spherical omnidirectional images acquired through a non-single viewpoint (NSVP) omnidirectional sensor. It has three key steps: (1) calibrate the camera to obtain parameters describing the spherical omnidirectional sensor, (2) map world points onto mirror points and, subsequently, onto image points, and (3) set up the projection plane for the final image unwrapping. Based on the projection plane selected, the algorithm is able to produce three common forms of unwrapping, namely cylindrical panoramic, cuboid panoramic, and ground plane view, using closed form mapping equations. The motivation for developing this technique is to address the complexity of using a NSVP omnidirectional sensor and ultimately to encourage its application in the robotics field. One of the main advantages of a NSVP omnidirectional sensor is that the mirror can often be obtained at a lower price than its single viewpoint counterpart.

1 Introduction

The omnidirectional view sensor has gradually emerged as a popular and effective vision sensor in the field of robotics. In most applications, the large field of view (FOV) provided by the sensor allows simultaneous monitoring of the surroundings from different view angles and therefore enables a more flexible and responsive algorithmic behaviour. Among the several configurations, the catadioptric version has received relatively more attention than its dioptric counterpart. Rees [1] first suggested that a hyperboloidal mirror mounted on a perspective camera would enable a 360° FOV on the camera. This was subsequently realised by Yamazawa et al. [2], and concurrently several other types of mirror profile were also introduced, such as conical [3], spherical [4], and paraboloidal [5].^a A summary of the omnidirectional view sensor classification is provided in Figure 1.

Figure 1

Classification of omnidirectional view sensors. The common catadioptric omnidirectional sensor mirror profiles with a practical SVP solution are the hyperboloidal and paraboloidal ones. The category “others” in the figure refers to profiles that are not of conic section. Spherical and conical are the common NSVP mirror profiles. Note that the SVP mirror profiles are also a subset of NSVP; if the SVP constraint is not met, they fall back to the NSVP category. Dioptric sensors are another category of omnidirectional sensors beyond the discussion scope of this paper.

However, as a trade-off for a large FOV, the mirror’s curvature causes an unfavourable distortion in omnidirectional images. Therefore, a pre-processing step termed “unwrapping” is often necessary for a perspectively-correct alteration of the acquired image. Mirrors with a single viewpoint (SVP) property [5, 6], such as the hyperboloidal, the paraboloidal, and a specifically designed mirror [7] for example, can be represented by a common sphere model [8] that allows effective and unified camera calibration techniques [9–18], or otherwise by approximation models [19, 20]. These calibration methods subsequently allow unwrapping of the images acquired. For instance, one of the approaches was presented by Lei et al. [21], where they demonstrated two forms of panoramic unwrapping: cylindrical and cuboid.

Calibration of non-SVP (NSVP) mirror profiles, such as those of conical and spherical shape, usually requires specific modelling based on the particular mirror shape used [22–24], an approximation model [22, 23, 25], or otherwise, for example, the use of polarised imaging [26]. NSVP mirrors are popular choices for catadioptric omnidirectional sensors due to their immediate availability and lower cost, particularly the spherical one. Further evaluation of the advantages of the spherical mirror profile is provided in Section 2. Among the earlier related works on NSVP omnidirectional image unwrapping, Gaspar et al. [27, 28] worked on unwrapping of spherical omnidirectional images into ground plane view (bird’s eye view). Later, Jeng and Tsai [29] proposed a mirror-invariant technique for panoramic unwrapping by means of ground-truth information calibration.

In this article, a closed-form solution for spherical omnidirectional image unwrapping incorporating advantages from the different techniques is presented. Different from previous works, the proposed unwrapping technique (1) does not require any prior knowledge of sensor parameters or ground-truth information, (2) produces output that scales accordingly with the resolution of the image, (3) utilises closed form forward and backward mapping equations, and (4) is designed for multiple output forms such as cylindrical panoramic view, cuboid panoramic view, and ground plane view unwrapping.

The proposed approach attempts ray tracing of the light sources in the surroundings captured by the camera of the omnidirectional sensor. By acquiring the mathematical model that describes the direction of a ray, the desired form of unwrapping can be achieved by choosing an appropriate 3D projection plane. There are essentially three stages of procedure required in order to complete the unwrapping. The first stage requires the perspective camera of the omnidirectional system to be calibrated so that an equivalent radius and height of the spherical mirror can be geometrically deduced based on the resolution of the image. The second stage makes use of these parameters to provide a closed-form analytical solution to the ray tracing of the imaging system that enables the corresponding mapping of world points and their equivalent image points. The last stage of the procedure is to set up the projection plane of the unwrapped image in 3 dimensional (3D) space.

The rest of the article is organised as follows: In Section 2, the use of a spherical mirror profile is justified despite the complexity introduced, and in Sections 3, 4, and 5, the three key stages are explained. In Section 6, issues regarding the output quality of the algorithm are discussed. The conclusion of the work is provided in Section 7. Finally, the Appendix provides detailed derivations of several important equations used in this article.

2 Evaluation on a spherical omnidirectional sensor

Similar to other NSVP mirror profiles, the spherical mirror profile is often unfavourable in applications that require mapping between an image point and its corresponding unwrapped counterpart due to the lack of a practical solution for SVP formation. However, the benefit of utilising the spherical mirror should not be overlooked, as it may provide an omnidirectional sensor solution that is justifiably more practical at present. Firstly, a spherical object with a polished surface is easily accessible, and so the cost is reasonably low compared to other SVP mirror profiles, which are mostly custom made using computer numerical control (CNC) machines. Strictly speaking, a SVP formation has rather demanding requirements which, if not met, render the SVP property of the mirrors an approximation at best. Therefore, most of the time they are practically of NSVP setting.

Secondly, just as a para-catadioptric sensor is not affected by vertical translational error in fabrication [5, 6], the spherical mirror is invariant to rotational error up to a certain degree because it is rotationally symmetrical about all axes, as illustrated in Figure 2. Apart from that, since it is not crucial to maintain a specific distance between a NSVP mirror and the camera’s effective focal point along the optical axis, the spherical mirror is also invariant to translational error along the optical axis. In short, the design constraint for a spherical omnidirectional sensor is more relaxed and can tolerate fabrication error.

Figure 2

Spherical mirror’s tolerance of rotational error. A spherical mirror is technically a hemisphere; the other half is usually not useful and is therefore rarely incorporated into the sensor design. Since the viewing angle does not encompass the complete hemisphere, it can often tolerate rotational error up to a certain degree, θ. The vertical dashed line represents the optical axis.

Thirdly, the parameters describing a spherical model are as few as two: its radius and its position along the optical axis. Compared to other more complex shapes, its parameter calibration is theoretically simpler and more straightforward, as described in Section 3.

3 Stage 1-camera calibration

Prior to ray tracing, calibration is needed to estimate the two parameters describing the spherical mirror: its radius, R, and the distance between its centre and the projection centre, h. Perspective cameras are generally modelled as shown in Figure 3. The parameters describing the model are usually grouped together and termed the intrinsic parameters. This information is useful in mapping the relationship between an image pixel and its corresponding world point in 3D space.

Figure 3

Perspective camera model. A perspective camera can be modelled by the intrinsic parameters. The optical axis passes through the centre of projection, C, and the principal point, c, which lies on the perpendicular image plane, I. The distance between C and c is termed the focal distance, f.

At present, methods to calibrate the intrinsic parameters of SVP omnidirectional cameras with the mirror attached are well established as reported in [10, 19, 30, 31]. For NSVP mirror profiles such as those of conical and spherical, a unified calibration algorithm requires polarisation imaging [26] for instance. However, since the sphere has only two parameters, it can be easily deduced from a calibrated camera instead [24], such as by the use of Zhang’s [32] method. Subsequently, the camera’s intrinsic parameter, K, can be obtained in the following form:

$$K = \begin{bmatrix} f_x & s & u_o \\ 0 & f_y & v_o \\ 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where f denotes the focal distances, with subscripts x and y denoting the respective axes, s is the pixel skew, and c = [u_o, v_o]^T is the principal point. Without any loss of generality, a sensor with rectangular pixels is assumed, where s = 0. In practice, the actual sensor in a perspective camera may not necessarily have square pixels and will therefore introduce an aspect ratio f_y/f_x. However, the two focal distances are often very similar, rendering f_y/f_x ≈ 1. With a negligible margin of error, we have treated f_x = f_y in our application and avoided the extra computation needed to correct the input image due to a non 1:1 aspect ratio. The calibrated result of the perspective camera is shown in Table 1. For brevity, f_x and f_y are subsequently denoted by f.
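As a minimal illustration (not taken from the authors' implementation), the following Python sketch assembles K of Equation (1) from hypothetical calibrated values and applies the f_x ≈ f_y simplification described above; the numeric values are placeholders, not those of Table 1.

```python
# Minimal sketch: build the intrinsic matrix K of Equation (1) and collapse
# the two focal distances into a single f when their ratio is close to 1.
import numpy as np

fx, fy = 840.2, 841.1          # hypothetical focal distances in pixels
u0, v0 = 320.5, 239.8          # hypothetical principal point
s = 0.0                        # rectangular pixels assumed, so zero skew

K = np.array([[fx, s,  u0],
              [0., fy, v0],
              [0., 0., 1.]])

# With fy/fx ~= 1, a single focal distance f is used in all later equations.
assert abs(fy / fx - 1.0) < 0.01
f = 0.5 * (fx + fy)
```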

Table 1 Calibrated values of the intrinsic parameters of the perspective camera in pixels

When a sphere is projected onto the image plane, a circular feature is obtained, as shown in Figure 4. The calibrated f is useful in estimating the radius of the spherical mirror based on the resolution of the input image. The ground-truth radius of the sphere is not necessary because, given the limited quality of the input image, an unwrapped image whose resolution is based on that of the input image is logically sufficient. Therefore, we establish the existence of a virtual sphere, assumed to be located at a constrained distance intersecting the image plane based on the calibrated f. R and h are then remapped from Figure 4 onto the virtual sphere as illustrated in Figure 5.

Figure 4

Projection of sphere on image plane. When a sphere is projected onto an image plane, a circular feature is obtained. The parameter R describes the spherical mirror’s radius while h is the distance between the spherical mirror and the centre of projection, C. However, the radius of the circular feature does not represent an equivalent radius of the sphere due to the viewing angle, ζ.

Figure 5

Real world image acquisition model. The actual perspective camera collects light rays that enter its effective pupil introduced by a perspective lens. The intensity of the light rays is then measured by the CCD/CMOS sensor plane placed behind the effective pupil. The distance of the effective pupil to the sensor plane is the effective focal length. The effective pupil is equivalent to the centre of projection, C, of the perspective camera model, where the image plane, I, can be imagined to be located further behind the sensor plane at the distance of its focal distance, f, forming I_flipped. As shown in the figure, the perspective camera model is placed overlapping the real world image acquisition figure with the image plane placed in front of the centre of projection. Based on the obtained focal distance, an equivalent virtual sphere mirror is visualised to be intersecting the image plane at a constrained distance, h. The projection of the sphere on the image plane can also be visualised as a compressed version of the virtual sphere. Note that the parameters R and h are remapped from the actual spherical mirror (in Figure 4) onto the virtual sphere.

Prior to parameter estimation, the radius of the circular feature, ρ, on the image plane is first estimated using the Hough Circle Transform [33]. Geometrically, it is understood that ρ < R, as illustrated in Figure 5. Due to the viewing angle, ζ, of the perspective projection from C, a certain portion of the actual spherical mirror/virtual sphere will not appear on the image plane. To estimate the remapped R, two assumptions are made: (1) the centre of the spherical mirror, and therefore the centre of the virtual sphere, coincides with the optical axis, and (2) at least a “hemisphere” of the spherical mirror is visible to the perspective camera. Although the first assumption may not necessarily be true, as fabrication error will always result in misalignment, it is a reasonable approximation [15, 19, 20, 29] and will later greatly simplify the coordinate mapping in Section 4. The remapped R and h can be derived by the method of similar triangles as follows:

$$R = \frac{\rho}{f}\, l = \rho\sqrt{1 + t^2} \tag{2}$$
$$h = \sqrt{l^2 + R^2} = l\sqrt{1 + t^2} \tag{3}$$
$$l = \sqrt{f^2 + \rho^2}, \qquad t = \tan\frac{\zeta}{2} = \frac{R}{l} = \frac{\rho}{f}$$

From Equations (2) and (3), √(1 + t²) is easily observed to be a correction factor for the parameters. The estimated R and h will be useful in completing the mapping of world points to equivalent image points, and vice versa. The estimated parameter values for our spherical mirror are shown in Table 2.

Table 2 Estimated values of the virtual sphere parameters in pixels

In Table 2, the estimated principal point, which is the centre of the circular feature on the image plane obtained using the Hough Circle Transform, is easily translated into the centre of the virtual sphere. The error introduced when comparing with that from the intrinsic parameter calibration in Table 1 is marginal. Since the assumption that the optical axis coincides with the centre of the virtual sphere was made, the estimated principal point from Table 2 is used instead of that from Table 1.
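Stage 1 can be summarised in a short sketch. The Python code below is a non-authoritative illustration that assumes the circular mirror boundary can be located with OpenCV's Hough circle routine and then applies Equations (2) and (3); the detector parameters passed to HoughCircles are placeholders and would need tuning for a real image.

```python
# Sketch of Stage 1 parameter estimation (Equations (2)-(3)) from the
# circular mirror feature found by a Hough Circle Transform.
import cv2
import numpy as np

def estimate_R_h(gray_image, f):
    """gray_image: 8-bit single-channel omnidirectional image.
    Returns (R, h, centre) of the virtual sphere, all in pixels."""
    circles = cv2.HoughCircles(gray_image, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray_image.shape[0],
                               param1=100, param2=30)
    cx, cy, rho = circles[0, 0]          # centre and radius of the image circle
    t = rho / f                          # t = tan(zeta/2) = rho / f
    corr = np.sqrt(1.0 + t * t)          # correction factor sqrt(1 + t^2)
    R = rho * corr                       # Equation (2)
    l = np.sqrt(f**2 + rho**2)
    h = l * corr                         # Equation (3)
    return R, h, (cx, cy)
```

The returned circle centre plays the role of the estimated principal point of Table 2, which, as discussed above, is used in place of the principal point of Table 1.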

4 Stage 2-mapping of points

For a typical catadioptric omnidirectional sensor, incident rays make contact with the mirror, are reflected with respect to the normal axis at the contacted mirror surface, and subsequently enter the pupil of the perspective camera. Figure 5 illustrates the actual image acquisition of a perspective camera in the presence of an effective pupil and effective focal length introduced by a perspective lens, with the perspective camera model overlaid. The perspective camera model has an equivalent structure where the centre of projection is analogous to the effective pupil. With this constraint, ray tracing from the image plane to an equivalent world coordinate is made possible. The image plane in an actual perspective camera is logically located behind the effective pupil, but in the perspective camera model it has been placed in front of the centre of projection. This is useful so that the image pixels are not mathematically flipped in both axes and the common image coordinate system can be applied directly for pixel referencing.

In Section 3, an assumption was made that the spherical mirror’s centre coincides with the optical axis, and subsequently the geometry of the light rays is assumed to be rotationally symmetrical about the optical axis. Therefore, within a 3D space where points are represented in the Cartesian form P(x,y,z) with the optical axis labelled as the z-axis, ρ = √(x² + y²) can be introduced to reduce the dimension of our problem to 2D. Without any loss of generality, we position our origin at the centre of the virtual sphere. Figure 6 illustrates the important geometry used for subsequent derivations with the reduced dimensionality.

Figure 6

Important geometry for mapping functions derivation. The corresponding relationship between an arbitrary world point, P w , mirror point, P m , image point, P i , and caustic point, P c , in the ρ-z plane are provided by the caustic curve and mirror parameters. Important geometry for equation derivation is also illustrated.

In general, the point mapping process is a two-step translation process. The first step translates points between image and mirror while the second step translates points between mirror and world. Mirror points are points on the spherical mirror where light rays are reflected, while world points are the points of light source in the 3D space. In subsequent derivations, variables with subscripts i, m, c, and w represent their respective image, mirror, caustic, and world components. The idea of the caustic is discussed in Section 4.2. For brevity, functions that map in the order P_i to P_m to P_w are subsequently denoted as “forward mapping functions” whereas their reversed counterparts are denoted as “backward mapping functions”.

Similar derivations of such a point mapping process have been previously attempted. Micusik and Pajdla [22, 23] did not provide a closed-form two-way translation but relied on a numerical search for the backward mapping function. That was addressed by Agrawal et al. [24] in an approach which is rather comparable to our method. They showed that the backward mapping is achievable via a fourth order equation, while we have employed a sixth order one due to a tight integration with the caustic geometry. However, for the forward mapping, our approach involves fewer operations, with 11 additions/subtractions, 23 multiplications/divisions, and 3 square roots (2 square roots for cylindrical unwrapping) instead of 38 additions/subtractions, 55 multiplications/divisions, and 2 square roots as in [24]. While both of our mapping functions operate in the same P(ρ, z) space, the derivations of Agrawal et al. [24] use the P(ρ, z) space in the forward mapping but the P(x, y, z) space in the backward mapping. Thus, our approach provides an advantage in cylindrical unwrapping, which has a constant ρ_w.

4.1 Mapping between image point and mirror point

The reflected light from the mirror is assumed to pass through the centre of projection of the perspective camera model (which is analogous to the camera pupil). In Figure 6, an arbitrary mirror point, P_m(ρ_m, z_m), and its corresponding image point, P_i(ρ_i, f), are illustrated as the intersection points of the reflected light with the virtual spherical mirror and the image plane, respectively. The backward mapping function for ρ_i, given that P_m is known, is straightforward by using the method of similar triangles:

$$\rho_i = \frac{f}{h - z_m}\,\rho_m \tag{4}$$

On the other hand, the forward function for P_m given ρ_i demands the intersection point of a straight line equation representing the reflected light, k(ρ) = m_k ρ + c_k, and the equation of the virtual sphere, s(ρ) = √(R² − ρ²). Geometrically, m_k and c_k can be easily identified as −f/ρ_i and h, respectively. Equating both functions at the mirror point ρ = ρ_m yields the quadratic equation shown in Equation (5), whose relevant root, expressed in terms of ρ_i, gives Equation (6).

$$k(\rho_m) = s(\rho_m) \;\Rightarrow\; m_k\,\rho_m + c_k = \sqrt{R^2 - \rho_m^2}$$
$$\left(m_k^2 + 1\right)\rho_m^2 + 2 m_k c_k\,\rho_m + c_k^2 - R^2 = 0 \tag{5}$$
$$\rho_m = \frac{fh - \sqrt{-h^2\rho_i^2 + f^2 R^2 + \rho_i^2 R^2}}{f^2 + \rho_i^2}\,\rho_i \tag{6}$$

Mathematically, the other solution of Equation (5) is always larger than that of Equation (6) and is therefore rejected by observing the geometry of the intersections. Subsequently, z_m can be obtained from either s(ρ) or k(ρ) as follows:

$$z_m = s(\rho_m) = \sqrt{R^2 - \rho_m^2} \tag{7}$$
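A compact sketch of the two mappings of this subsection is given below; it assumes R, h, and f are already known from Stage 1 and is a direct transcription of Equations (4), (6), and (7), not the authors' code.

```python
# Sketch of the image <-> mirror point mapping (Equations (4), (6), (7)).
import numpy as np

def mirror_to_image(rho_m, R, h, f):
    """Backward direction, Equation (4): mirror radius rho_m -> image radius rho_i."""
    z_m = np.sqrt(R**2 - rho_m**2)                        # Equation (7)
    return f * rho_m / (h - z_m)

def image_to_mirror(rho_i, R, h, f):
    """Forward direction, Equation (6): image radius rho_i -> mirror point (rho_m, z_m)."""
    disc = -h**2 * rho_i**2 + f**2 * R**2 + rho_i**2 * R**2
    rho_m = (f * h - np.sqrt(disc)) / (f**2 + rho_i**2) * rho_i
    z_m = np.sqrt(R**2 - rho_m**2)                        # Equation (7)
    return rho_m, z_m
```

The two functions are inverses of each other, which can be checked numerically by a round trip rho_m -> rho_i -> rho_m for any consistent R, h, and f.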

4.2 Mapping between mirror point and world point

In [5, 6, 34], the caustics of several NSVP omnidirectional sensors were extensively reviewed. A caustic is defined as the locus of viewpoints forming a surface that is tangential to the incident rays. The centre of projection of the perspective camera model is analogous to the effective pupil of the camera; therefore, a virtual caustic is applied onto the virtual sphere defined in Section 3. Assuming that the spherical mirror’s centre coincides with the optical axis, the caustic surface is rotationally symmetrical about the optical axis and is therefore reduced to a curve in the ρ-z plane. Before the subsequent mapping functions can be derived, the caustic model has to be investigated first.

4.2.1 Caustic curve

A general model for caustic curve that is compatible for all conic section mirror profiles has been provided by Swaminathan et al. [34]. However, since the generalisation is not important for our application, we opt for a simpler model derived specifically for a spherical mirror provided by Baker and Nayar [5, 6].

The parametric equations for the spherical mirror caustic curve are derived in [5, 6] assuming R = 1. In order to serve our purpose, the parametric equations have been reformulated with R as a parameter instead. Firstly, an incident ray reflected at a mirror point P_m(ρ_m, z_m) = (ρ_m, √(R² − ρ_m²)) is described by a straight line equation, j(ρ_m), and this is repeated for the point P_m(ρ_m + dρ_m, z_m + dz_m). Secondly, the intersection point of j(ρ_m) and j(ρ_m + dρ_m) is obtained by taking the limit dρ_m → 0 while applying a constraint on dz_m with (ρ_m + dρ_m)² + (z_m + dz_m)² = R². Denoting a point on the caustic curve as P_c(ρ_c, z_c), the result of the derivation is shown in Equations (8) and (9). Detailed derivations can be found in the Appendix. Figure 7 shows sample plots of the caustic curve of a spherical mirror with R = 1 described in parametric equations with varying h.

$$\lim_{d\rho_m \to 0} \rho_c = \frac{2h^2\rho_m^3}{R^2\left(R^2 + 2h^2\right) - 3hR^2\sqrt{R^2 - \rho_m^2}} \tag{8}$$
Figure 7

Sample plots of the caustic curve of a spherical mirror. The caustic curve of a spherical mirror with R = 1 is plotted for varying h. The solid line represents the spherical mirror.

$$z_c = \frac{h\left(-R^6 + hR^2\sqrt{R^2 - \rho_m^2}\left(2\rho_m^2 + 3R^2\right) + h^2\left(4\rho_m^4 - 2\rho_m^2R^2 - 2R^4\right)\right)}{\left(R^2\left(R^2 + 2h^2\right) - 3hR^2\sqrt{R^2 - \rho_m^2}\right)\left(R^2 - 2h\sqrt{R^2 - \rho_m^2}\right)} \tag{9}$$
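The caustic curve can be sampled directly from Equations (8) and (9); the short sketch below, with an assumed unit radius and an arbitrary h, produces points of the kind plotted in Figure 7.

```python
# Sketch: sample the spherical-mirror caustic curve of Equations (8)-(9).
import numpy as np

def caustic_point(rho_m, R=1.0, h=2.0):
    s = np.sqrt(R**2 - rho_m**2)
    rho_c = 2 * h**2 * rho_m**3 / (R**2 * (R**2 + 2 * h**2) - 3 * h * R**2 * s)   # Eq. (8)
    z_c = (h * (-R**6 + h * R**2 * s * (2 * rho_m**2 + 3 * R**2)
                + h**2 * (4 * rho_m**4 - 2 * rho_m**2 * R**2 - 2 * R**4))
           / ((R**2 * (R**2 + 2 * h**2) - 3 * h * R**2 * s) * (R**2 - 2 * h * s)))  # Eq. (9)
    return rho_c, z_c

rho = np.linspace(0.05, 0.85, 50)                 # mirror points within the visible region
curve = np.array([caustic_point(r) for r in rho]) # sampled (rho_c, z_c) pairs
```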

4.2.2 Point mapping

The forward mapping function from P_m to its corresponding world point, P_w(ρ_w, z_w), is obtained by differentiating Equations (8) and (9) with respect to ρ_m using the chain rule of calculus. This forms the gradient, m_j, of the incident ray, j(ρ). However, due to the loss of depth information during the image acquisition process, a constraint on either z_w or ρ_w is needed. Mathematically, the constraint is needed because j(ρ) is only valid for ρ ∈ (ρ_m, ∞). In practical applications, however, a constraint on ρ_w is better than one on z_w because at certain ρ_m, m_j changes sign. More discussion is provided in Section 6. Assuming ρ_w is to be constrained, z_w is then given by:

$$\frac{z_m - z_w}{\rho_m - \rho_w} = m_j = \frac{dz_c/d\rho_m}{d\rho_c/d\rho_m}$$
$$z_w = z_m - \left(\rho_m - \rho_w\right)\frac{dz_c/d\rho_m}{d\rho_c/d\rho_m} \tag{10}$$

where m j is derived from the caustic curve provided in Equation (13).

In order to derive the backward mapping functions from P_w to its corresponding P_m, more measures have to be taken. An incident ray passes through a P_w and is tangential to the caustic curve at P_c. This relationship can be exploited as the sixth order polynomial shown in Equation (11). The detailed derivation can be found in Section 4.3.

$$
\begin{aligned}
&16h^4\left(z_w^2 + \rho_w^2\right)\rho_m^6 - 16h^4R^2\rho_w\,\rho_m^5
+ 4h^2R^2\left(h^2R^2 + 2hR^2z_w + \left(-8h^2 + 2R^2\right)z_w^2 + \left(-8h^2 + 2R^2\right)\rho_w^2\right)\rho_m^4 \\
&\quad + 4h^2R^4\left(6h^2 - R^2 - 2hz_w\right)\rho_w\,\rho_m^3
+ R^4\left(-4h^4R^2 + h^2R^4 + \left(-8h^3R^2 + 2hR^4\right)z_w + \left(-4h^2 + R^2\right)^2z_w^2 + \left(20h^4 - 12h^2R^2 + R^4\right)\rho_w^2\right)\rho_m^2 \\
&\quad - 2hR^6\left(4h^2 - R^2\right)\left(h - z_w\right)\rho_w\,\rho_m
- R^6\left(4h^4 - 5h^2R^2 + R^4\right)\rho_w^2 = 0
\end{aligned} \tag{11}
$$

By solving Equation (11), the corresponding ρ_m can be obtained, and z_m = √(R² − ρ_m²) can be determined accordingly. Since 0 ≤ ρ_m ≤ ρ_m,max, there will only be one valid solution of Equation (11). ρ_m,max is easily observed as the maximum ρ_m due to the maximum viewing angle, ζ, as follows:

$$\tan\frac{\zeta}{2} = \frac{\rho_{m,\max}}{R} = \frac{\sqrt{h^2 - R^2}}{h}$$
$$\rho_{m,\max} = R\sqrt{1 - \left(\frac{R}{h}\right)^2} \tag{12}$$
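The following sketch illustrates both directions of the mirror-to-world mapping under the stated assumptions: the forward direction applies Equation (10) with m_j taken from Equation (14), and the backward direction solves the polynomial of Equation (11) with numpy.roots and keeps the single real root inside [0, ρ_m,max] of Equation (12). It is a numerical illustration rather than the authors' implementation, and the simple root filter shown may need extra care near the limits of the visible mirror region.

```python
# Sketch of the mirror <-> world point mapping (Equations (10)-(12), (14)).
import numpy as np

def incident_gradient(rho_m, R, h):
    """m_j of Equation (14)."""
    s = np.sqrt(R**2 - rho_m**2)
    num = h * R**4 + R**4 * s + 2 * h**2 * (2 * rho_m**2 - R**2) * s
    den = rho_m * (R**4 + 4 * h**2 * (rho_m**2 - R**2))
    return num / den

def mirror_to_world(rho_m, rho_w, R, h):
    """Forward mapping, Equation (10): constrain rho_w and return z_w."""
    z_m = np.sqrt(R**2 - rho_m**2)
    return z_m - (rho_m - rho_w) * incident_gradient(rho_m, R, h)

def world_to_mirror(rho_w, z_w, R, h):
    """Backward mapping: the valid root of the Equation (11) polynomial."""
    c6 = 16 * h**4 * (z_w**2 + rho_w**2)
    c5 = -16 * h**4 * R**2 * rho_w
    c4 = 4 * h**2 * R**2 * (h**2 * R**2 + 2 * h * R**2 * z_w
                            + (2 * R**2 - 8 * h**2) * (z_w**2 + rho_w**2))
    c3 = 4 * h**2 * R**4 * (6 * h**2 - R**2 - 2 * h * z_w) * rho_w
    c2 = R**4 * (h**2 * R**4 - 4 * h**4 * R**2
                 + (2 * h * R**4 - 8 * h**3 * R**2) * z_w
                 + (R**2 - 4 * h**2)**2 * z_w**2
                 + (20 * h**4 - 12 * h**2 * R**2 + R**4) * rho_w**2)
    c1 = -2 * h * R**6 * (4 * h**2 - R**2) * (h - z_w) * rho_w
    c0 = -R**6 * (4 * h**4 - 5 * h**2 * R**2 + R**4) * rho_w**2
    roots = np.roots([c6, c5, c4, c3, c2, c1, c0])
    rho_max = R * np.sqrt(1 - (R / h)**2)                 # Equation (12)
    real = roots[np.abs(roots.imag) < 1e-9].real
    rho_m = real[(real >= 0) & (real <= rho_max)][0]      # single valid root expected
    return rho_m, np.sqrt(R**2 - rho_m**2)
```

In Section 5, the projection plane fixes ρ_w (cylindrical and cuboid planes) or z_w (ground plane), so these two functions are only ever evaluated with one world coordinate supplied by the chosen plane.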

4.3 Derivation of world-to-mirror mapping function

In order to derive Equation (11), it is first considered that an incident ray passes through a P w (ρ w ,z w ) and is tangential to the caustic curve at P c (ρ c ,z c ). Therefore, the mapping of a P w to its corresponding P m (ρ m ,z m ) can be done by making use of the gradient of the incident ray, m j , which can be obtained by differentiating Equations (8) and (9) with respect to ρ m using the chain rule of calculus.

$$m_j = \frac{z_c - z_w}{\rho_c - \rho_w} = \frac{dz_c/d\rho_m}{d\rho_c/d\rho_m} \tag{13}$$
$$\;\;= \frac{hR^4 + R^4\sqrt{R^2 - \rho_m^2} + 2h^2\left(2\rho_m^2 - R^2\right)\sqrt{R^2 - \rho_m^2}}{\rho_m\left(R^4 + 4h^2\left(\rho_m^2 - R^2\right)\right)} \tag{14}$$

Substituting Equations (8) and (9) into Equation (13) and rearranging then yields Equation (15). By discarding the denominator of Equation (15), then isolating and squaring the remaining square root term, the square root functions are eliminated, forming Equation (16).

$$\frac{R^4\left(h\left(\rho_w - \rho_m\right) + \sqrt{R^2 - \rho_m^2}\,\rho_w - \rho_m z_w\right) + 2h^2\left(\left(2\rho_m^2\rho_w - R^2\rho_w - \rho_m R^2\right)\sqrt{R^2 - \rho_m^2} + 2\rho_m z_w\left(R^2 - \rho_m^2\right)\right)}{\rho_m\left(R^4 + 4h^2\left(\rho_m^2 - R^2\right)\right)} = 0 \tag{15}$$

Finally, in a step not shown in this article due to the length of the equation, Equation (16) is summed over a common denominator and the denominator is again discarded. Collecting the ρ_m terms then forms Equation (11).

$$\left(-R^2 + \rho_m^2\right) + \frac{\left(-hR^4\rho_m + 4h^2R^2z_w\rho_m - R^4z_w\rho_m - 4h^2z_w\rho_m^3 + hR^4\rho_w\right)^2}{\left(-2h^2R^2\rho_m - 2h^2R^2\rho_w + R^4\rho_w + 4h^2\rho_m^2\rho_w\right)^2} = 0 \tag{16}$$

5 Stage 3-projection plane for unwrapping

A virtual projection plane is assumed to be an imaginary 2 dimensional (2D) plane of light sources. The illuminated light from the plane travels towards the virtual sphere and is thus reflected into the camera’s pupil. By using the forward mapping functions derived in Section 4, the corresponding incident ray of a specific image point can be traced, and subsequently a world point where the incident ray coincides with the virtual projection plane can be obtained. Then, the backward mapping functions are used to populate the virtual plane using a common interpolation technique (i.e. bilinear interpolation). Finally, the plane itself results in an unwrapped image. In addition, by selecting the shape and position of the virtual plane, different forms of unwrapping are possible. Lei et al. [21] documented the idea of the virtual plane in detail and demonstrated two forms of panoramic unwrapping: cylindrical and cuboid. Cuboid panoramic unwrapping is done by replacing the cylindrical virtual plane with a cuboid one. For our case, we will demonstrate that a ground plane view is also possible with our method by choosing an appropriate projection plane as illustrated in Figure 8c.

Figure 8

Projection planes for unwrapping. By choosing an appropriate projection plane, the algorithm is able to produce different unwrapped views, including (a) the cylindrical panoramic view, (b) the cuboid panoramic view, and (c) the ground-plane view.

In order to take advantage of a unified platform with multiple-form unwrapping output capability, the mapping functions have been conveniently made to accept and produce points in Cartesian form. Therefore, the location of each element in a virtual plane (which ends up as an image pixel) described in Cartesian points is easily translated into its respective image point. Generally, a lookup table of corresponding points is generated so that subsequent unwrapping can be sped up. Such practice is commonly applied in the omnidirectional image unwrapping field and is documented in detail in Jeng and Tsai’s [29] work.
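As a hedged illustration of this lookup-table practice (not the authors' software), the sketch below evaluates the backward mapping once per output pixel, stores the corresponding input-image coordinates, and lets OpenCV's remap perform the bilinear interpolation for every subsequent frame; backward_map, centre, f, and h are assumed to come from the earlier stages.

```python
# Sketch: build an unwrapping lookup table once, then remap each new frame.
import cv2
import numpy as np

def build_lut(world_points, backward_map, centre, f, h):
    """world_points: (H, W, 3) array of (x, y, z) points on the chosen plane.
    backward_map: function (rho_w, z_w) -> (rho_m, z_m), e.g. world_to_mirror."""
    x, y, z = world_points[..., 0], world_points[..., 1], world_points[..., 2]
    rho_w = np.hypot(x, y)
    phi = np.arctan2(y, x)
    map_x = np.empty(rho_w.shape, np.float32)
    map_y = np.empty(rho_w.shape, np.float32)
    for idx in np.ndindex(rho_w.shape):
        rho_m, z_m = backward_map(rho_w[idx], z[idx])
        rho_i = f * rho_m / (h - z_m)                  # Equation (4)
        map_x[idx] = centre[0] + rho_i * np.cos(phi[idx])
        map_y[idx] = centre[1] + rho_i * np.sin(phi[idx])
    return map_x, map_y

def unwrap(image, map_x, map_y):
    # Bilinear interpolation of the omnidirectional image at the stored positions.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```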

5.1 Cylindrical panoramic unwrapping

Cylindrical panoramic unwrapping can be done using an open-ended cylindrical plane wrapping around the spherical mirror, as shown in Figure 8a. Since a cylinder is rotationally symmetrical about its central axis, and by letting this axis coincide with the optical axis, points on the projection plane described in a cylindrical coordinate system (e.g. (ρ,φ,z)) will have the same mapping of ρ and z for all φ. Therefore, only one set of mapped coordinates is required.

The cylinder has a user-defined radius in pixels, r_k, and thus the width of the unwrapped image is roughly 2πr_k pixels. The height of the cylinder depends on the region of the image to be unwrapped and is also user-defined. Due to the nature of the spherical mirror, unwrapping is only suitable up to a certain region of the mirror; more is discussed in Section 6. Figure 9a shows the set-up of our experiment in a controlled environment. The captured image is shown in Figure 9b, with the cylindrical panoramic unwrapping shown in Figure 10.
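A possible construction of the cylindrical projection plane, with an assumed user-chosen radius r_k and height range, is sketched below; since the mapping is identical for every φ, a practical implementation would map a single column with the backward mapping and replicate it around the cylinder.

```python
# Sketch of the cylindrical projection plane of Figure 8a.
import numpy as np

def cylinder_plane(r_k, z_lo, z_hi, height):
    width = int(round(2 * np.pi * r_k))                  # roughly 2*pi*r_k pixels wide
    phi = np.linspace(0, 2 * np.pi, width, endpoint=False)
    z = np.linspace(z_hi, z_lo, height)                  # top row first
    pp, zz = np.meshgrid(phi, z)
    pts = np.stack([r_k * np.cos(pp), r_k * np.sin(pp), zz], axis=-1)
    return pts                                           # (height, width, 3), constant rho_w = r_k
```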

Figure 9

Experiment set-up for panoramic unwrapping. A controlled environment for panoramic unwrapping is set up as in (a) and corresponding omnidirectional view image is captured as shown in (b).

Figure 10

Cylindrical panoramic unwrapping. Common cylindrical unwrapping can be done using an open-ended cylindrical virtual plane.

5.2 Cuboid panoramic unwrapping

Cuboid panoramic unwrapping is an enhanced version of the cylindrical one. As documented in [21], a cuboid projection plane artificially creates a perspective view of the surroundings. The output of this method results in a more natural view of the surroundings for human eye perception and is particularly effective if the surrounding is a rectangularly confined space. Figure 11 is the result of unwrapping Figure 9b using the cuboid projection plane illustrated in Figure 8b. The result shown assumed a cuboid placed at the centre of the virtual plane with upright orientation to the x-axis, but in practice a rectangular one would work equally well. The shape, position, and orientation of the cuboid projection plane are mainly dependent on the boundary of the surrounding space (e.g. walls, partitions, buildings, etc.).

Figure 11

Cuboid panoramic unwrapping. An appropriate cuboid virtual plane produces the cuboid panoramic unwrapping.

5.3 Ground plane view unwrapping

Ground plane view unwrapping generates an output that appears perspectively correct, as if the image were captured from some height above. A more commonly known term for the ground plane view is the bird’s eye view. As the name suggests, ground plane view unwrapping is mainly used to detect features on the ground. While panoramic unwrapping can include ground features, it would introduce low quality unwrapping due to insufficient data points (image pixels) near the centre of the mirror, rendering the interpolated data less useful. Previous work by Hicks and Bajcsy [35] performed analogue ground plane correction using a specialised mirror profile. Another work by Gaspar and Santos-Victor [27] corrects distortion of ground features by solving the geometry made by the captured light rays.

In Figure 12, a controlled environment to demonstrate ground plane view unwrapping is set up. To adapt to this form of unwrapping using the existing derived mapping functions, a projection plane that is normal to the optical axis is simply placed some distance away from the virtual sphere, instead of the upright projection planes used in the previous two panoramic unwrapping schemes, as shown in Figure 8c. An illustration of the result is shown in Figure 13.
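For completeness, a corresponding ground-plane grid can be produced in the same fashion as the cylindrical one; the sketch below assumes a user-chosen plane height z_g (below the virtual sphere) and extent, and would be fed to the same lookup-table routine.

```python
# Sketch of the ground-plane projection plane of Figure 8c.
import numpy as np

def ground_plane(z_g, half_extent, size):
    axis = np.linspace(-half_extent, half_extent, size)
    xx, yy = np.meshgrid(axis, axis)
    return np.stack([xx, yy, np.full_like(xx, z_g)], axis=-1)   # (size, size, 3)
```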

Figure 12

Experiment set-up for ground plane view unwrapping. A controlled environment for ground plane view unwrapping is set up as in (a) and a corresponding omnidirectional view image is captured as shown in (b).

Figure 13

Ground plane view unwrapping. Ground plane view unwrapping is compatible with the proposed unwrapping scheme by choosing a projection plane that is normal to the optical axis.

5.4 Assessment on accuracy

In order to assess the accuracy of the mapping functions, two experiments were conducted. The first experiment measures the fit of horizontal and vertical lines on a checker-box pattern. The second experiment reconstructs 3D points of a trihedron with checker-box pattern from two views.

5.4.1 Line fitting

In this experiment, a checker-box pattern was initially captured and unwrapped into the cuboid panoramic view. Then, the Harris and Stephens [36] corner detection algorithm was used to capture the corner points in the unwrapped checker-box pattern. For any ambiguities due to the detection of multiple corners at the same point, the centroid of the cluster was used instead. Subsequently, horizontal and vertical lines were fitted to the points using a linear regression model, as shown in Figure 14. Note that for vertical lines, the axes were flipped so that fitting is possible. Throughout the entire process, human interaction was kept minimal: the user only specifies the total number of points to detect and a coarse estimation of the location of the points.

Figure 14

Accuracy assessment of mapping functions. Horizontal and vertical lines are fitted on a checker-box pattern unwrapped into cuboid panoramic view as in (b) to assess the accuracy of the mapping functions. (a) is the same pattern taken using a perspective view camera.

In Table 3, an analysis of the line fitting in Figure 14b is presented. The mean gradient suggests the “straightness” of the fitted lines while the mean R² suggests the “goodness” of fit of the points involved. As can be seen, the mean gradient and mean R² approach 0 and 1 respectively, which implies a proportionally high degree of correctness in the relative positions of the points as they are mapped from the omnidirectional view to the cuboid panoramic view. A second analysis was done on the spacing, Δ, between the fitted lines. Letting Δ_y be the mean spacing of the horizontal lines and Δ_x the mean spacing of the vertical lines, the ideal benchmark checker-box pattern of Figure 14a should produce a ratio of Δ_y/Δ_x = 1, neglecting lens distortion. On the unwrapped checker-box pattern, it is found that Δ_y/Δ_x = 0.93, indicating an error of 7% in the ratio after unwrapping.

Table 3 Analysis of line fitting on Figure 14b
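A sketch of the line-fitting analysis described above is given below; it assumes the detected corners have already been grouped into rows (or flipped columns) and uses an ordinary least-squares fit, reporting the mean gradient, mean R², and mean spacing of the kind listed in Table 3. It is an illustrative reconstruction, not the authors' exact procedure.

```python
# Sketch: fit straight lines to grouped corner points and summarise the fit.
import numpy as np

def fit_lines(rows_of_points):
    """rows_of_points: list of (N, 2) arrays holding (x, y) corner coordinates,
    one array per (near-)horizontal line of the unwrapped checker-box."""
    gradients, r_squared, intercepts = [], [], []
    for pts in rows_of_points:
        x, y = pts[:, 0], pts[:, 1]
        m, c = np.polyfit(x, y, 1)                 # first order (straight line) fit
        residual = y - (m * x + c)
        gradients.append(m)
        r_squared.append(1.0 - residual.var() / y.var())
        intercepts.append(c)
    mean_spacing = np.mean(np.diff(np.sort(intercepts)))
    return np.mean(gradients), np.mean(r_squared), mean_spacing
```

Running this on both the horizontal group and the axis-flipped vertical group yields the two mean spacings whose ratio Δ_y/Δ_x is reported as 0.93 above.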

This simple line fitting experiment was meant to provide a preliminary assessment of the mapping functions without involving complex algorithms. Since it is not a common practice in previous works, a benchmark for the result in Table 3 is not possible. Thus, another experiment that is more commonly conducted, i.e. the 3D reconstruction in Section 5.4.2, was carried out to provide further evaluation of the mapping functions.

5.4.2 3D reconstruction

In this experiment, a 3D reconstruction [37] of a trihedron with a checker-box pattern was conducted. Initially, two images of the trihedron in Figure 15 were captured from two different viewing locations and unwrapped into cuboid form. Then, a total of 92 pairs of corresponding points from the two views were sampled manually. From the corresponding points, the fundamental matrix was first computed and subsequently the two camera matrices, P_1 and P_2, associated with the two views were deduced. As P_1 is set to [I|0], this results in a projective reconstruction of the corresponding points by the linear triangulation method. Finally, the reconstructed 3D points are upgraded to a metric reconstruction [37].
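The two-view experiment can be approximated with standard OpenCV routines as sketched below. Note that this sketch takes a shortcut to a metric result through the essential matrix, which presumes the unwrapped views behave like calibrated perspective images with an assumed intrinsic matrix K, whereas the article performs a projective reconstruction followed by a metric upgrade [37]; pts1 and pts2 are assumed to be the manually sampled correspondences.

```python
# Sketch: two-view reconstruction from corresponding points (simplified route).
import cv2
import numpy as np

def reconstruct(pts1, pts2, K):
    """pts1, pts2: (N, 2) float arrays of corresponding points; K: assumed intrinsics."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K                                     # essential matrix from F
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)      # relative pose of the second view
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera fixed at [I|0]
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # homogeneous 4xN points
    return (X[:3] / X[3]).T                             # (N, 3) reconstructed points
```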

Figure 15

A sample trihedron. A sample trihedron with checker-box pattern for the 3D reconstruction experiment.

From the reconstructed trihedron points, several measurements were obtained to assess the accuracy of the mapping equations. Figure 16 shows the distribution of the calculated root-mean-square error (RMSE) of the 3D points, with an average of 14.54 mm. Without further optimisation of the reconstruction (i.e. bundle adjustment), the error obtained is observed to be within a similar range as previous works [20, 22]. Finally, the 3D points were reprojected back onto the input images as shown in Figure 17, and the reprojection errors of the points were measured as shown in Figure 18. The average reprojection error of the 3D points was found to be 0.22 pixels.

Figure 16

RMSE of the 3D reconstruction. The average RMSE of the reconstructed 3D points is 14.54 mm.

Figure 17

Reprojection of the 3D reconstruction. The reconstructed 3D points are reprojected back to the input images.

Figure 18

Reprojection error of the 3D reconstruction. The average reprojection error of the reconstructed 3D points is 0.22 pixel.

5.4.3 Sources of error

The assessments of accuracy revealed a certain degree of error introduced by the mapping functions. The possible sources of error include the assumptions made in the derivations of the mapping functions. An ideal spherical mirror is assumed in the derivations, while this may not always be true in practice. Also, the spherical mirror’s centre might not coincide with the optical axis perfectly. Factors such as these could also affect the accuracy of the parameter calibration for f and R, which eventually leads to error compounding as the mapping is processed.

Other than that, the lens of the camera was ideally assumed to provide a perfect perspective projection; slight fish-eye distortion might be introduced as the omnidirectional view image is captured. Lastly, sampling error, whether manual or from the Harris corner detection algorithm [36], is inevitable.

6 Image quality and algorithm limitation

Due to the heavy dependency on ray tracing in the proposed algorithm, the different unwrapping forms are optimal only in certain regions of the omnidirectional image, radially confined about the centre of the mirror. For ground plane view unwrapping, incident rays with m_j ≤ 0 do not intersect an x-y plane placed below the virtual sphere, which translates to m_k ≥ 0 for the reflected rays. Theoretically, unwrapping cannot be done under the mentioned condition. In practical unwrapping, however, m_j never reaches 0 but converges towards it. In the converging region, there are insufficient data points (image pixels) in the input image to perform interpolation that yields a detailed output, and the region should therefore be avoided. Figure 19 shows a plot of ρ_i versus ρ_w in pixels, where ρ_i gradually converges to a limit as ρ_w progresses, indicating that widely spaced ρ_w are represented by similar ρ_i as the projection plane expands. The converging region suggests a low quality output region.
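One practical way to locate the degraded region, sketched below under the assumption that a composed mapping function from ρ_w to ρ_i is available (for example built from the functions of Section 4), is to sweep ρ_w outward and stop once the growth of ρ_i per output pixel falls below a chosen threshold, i.e. where the curve of Figure 19 flattens out.

```python
# Sketch: find the plane radius beyond which ground-plane output quality degrades.
import numpy as np

def usable_ground_radius(image_radius_of, rho_w_max, step=1.0, min_slope=0.05):
    """image_radius_of(rho_w) -> rho_i, composed from the mapping functions."""
    prev = image_radius_of(step)
    rho_w = 2 * step
    while rho_w < rho_w_max:
        cur = image_radius_of(rho_w)
        if (cur - prev) / step < min_slope:   # too few input pixels feed each output pixel
            return rho_w
        prev, rho_w = cur, rho_w + step
    return rho_w_max
```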

Figure 19

ρ_i versus ρ_w graph for ground plane unwrapping. For ground plane unwrapping, ρ_i gradually converges to a limit as ρ_w progresses (both in pixels). The converging region suggests a degraded quality output region where widely spaced ρ_w are roughly represented by similar ρ_i as the projection plane expands.

For panoramic unwrapping, upright projection planes suffer less from the above-mentioned limitation. As shown in the solid line plot in Figure 20, ρ_i is rather evenly “distributed” across z_w, indicating that the output is interpolated from rather evenly spaced data points. However, for practical usage, as points on the input image are represented using polar coordinates, the region close to the centre of the mirror should be avoided, as it is stretched along the angular axis after unwrapping, which produces an output with degraded quality.

Figure 20

ρ_i versus z_w graph and m_j versus z_w graph for upright plane unwrapping. For upright plane unwrapping, ρ_i is rather “evenly distributed” across z_w, as illustrated by the solid line plot (both are in pixels). Therefore, panoramic unwrapping suffers less quality degradation due to mapping along the ρ-axis. The overlapping dashed line plot illustrates the gradient of the incident ray, m_j, involved. Since m_j changes sign at one point, the constraint for Equation (10) is more convenient when ρ_w is chosen.

Also note in Figure 20 that an overlapping dashed line plot is provided to show that m_j changes sign during the complete mapping. In Section 4.2.2, as Equation (10) is derived, there is a need to provide an additional constraint on either ρ_w or z_w. Mathematically, if ρ_w is constrained, there might be two possible corresponding z_w depending on the sign of m_j at that instance; one of the z_w would be invalid by observing the geometry. In forward mapping, this means that an arbitrary z_w may not have a corresponding valid solution of ρ_w. For practical implementation, the constraint on ρ_w can be effectively provided by the projection plane, hence the consideration made when deriving Equation (10).

Apart from quality degradation due to the mapping functions, the physical bounds of the surroundings will also affect the “correctness” of the unwrapped image output. Depending on the position, the camera captures unknown portions of the ground plane and upright planes (e.g. walls). While this external factor is beyond the control of the algorithm, software can be written to facilitate the user in providing information on the effective region for unwrapping, as shown in Figure 21, using the forward mapping functions.

7 Conclusion

A novel technique of unwrapping for spherical omnidirectional images has been proposed. The algorithm comprises three key stages in which (1) the camera is first calibrated to obtain essential parameters, (2) ray tracing is then utilised to solve the functions that map points back and forth between the omnidirectional image and its unwrapped counterpart, and finally (3) a projection plane is set up for the unwrapping.

The proposed unwrapping scheme enables three commonly performed unwrapping forms, namely, cylindrical panoramic, cuboid panoramic, and ground plane view to be done. The different forms of unwrapping can be achieved by selecting an appropriate projection plane to be populated as the unwrapped image.

Finally, the accuracy of the mapping functions was assessed by conducting a simple line fitting and a 3D reconstruction. The line fitting experiment showed a 7% error in the checker-box pattern spacing ratio. For the 3D reconstruction experiment, the average RMSE was 14.54 mm while the average reprojection error was 0.22 pixels.

Figure 21

Selection of region of interest for image unwrapping. User interaction is needed to provide the effective region for unwrapping. An upper bound and a lower bound are needed for panoramic unwrapping, whereas for ground plane view unwrapping, only an upper bound is needed.

Appendix

Derivation of a spherical mirror’s caustic curve

In order to derive the caustic curve for a spherical mirror, the gradient of an incident ray, j(ρ, ρ_m) = m_j(ρ_m)ρ + c_j(ρ_m), at an arbitrary mirror point, P_m(ρ_m, z_m), is first derived. Note that, for clarity, m_j and c_j are written here as functions of ρ_m whereas j is a function of ρ and ρ_m; prior to this section, the arguments were omitted for brevity. Figure 6 shows that m_j(ρ_m) is in fact tan θ, where θ = α − 2β.

$$m_j(\rho_m) = \tan\theta = \tan(\alpha - 2\beta) = \frac{\tan\alpha\tan^2\beta + 2\tan\beta - \tan\alpha}{\tan^2\beta - 2\tan\alpha\tan\beta - 1} \tag{17}$$
$$\tan\alpha = \frac{h - z_m}{\rho_m} = \frac{h - \sqrt{R^2 - \rho_m^2}}{\rho_m} \tag{18}$$
$$\tan\beta = -\left.\frac{ds(\rho)}{d\rho}\right|_{\rho_m} = \frac{\rho_m}{\sqrt{R^2 - \rho_m^2}} \tag{19}$$

where tanα can be deduced geometrically while tanβ can be obtained from the gradient of the spherical mirror curve, s(ρ). Substituting Equations (18) and (19) into Equation (17) yields:

$$m_j(\rho_m) = -\frac{R^2\sqrt{R^2 - \rho_m^2} - hR^2 + 2h\rho_m^2}{2h\rho_m\sqrt{R^2 - \rho_m^2} - \rho_m R^2} \tag{20}$$

Secondly, the z-intercept of j(ρ, ρ_m), namely c_j(ρ_m), is obtained by examining the relationship z_m = j(ρ_m, ρ_m):

$$z_m = j(\rho_m, \rho_m) = m_j(\rho_m)\,\rho_m + c_j(\rho_m)$$

which implies that:

$$c_j(\rho_m) = z_m - m_j(\rho_m)\,\rho_m = \frac{\rho_m\left(R^2\sqrt{R^2 - \rho_m^2} - hR^2 + 2h\rho_m^2\right)}{2h\rho_m\sqrt{R^2 - \rho_m^2} - \rho_m R^2} + \sqrt{R^2 - \rho_m^2} \tag{21}$$

Thirdly, consider two incident rays contacting the mirror at P_m(ρ_m, z_m) and P_m(ρ_m + dρ_m, z_m + dz_m); their intersection points form the caustic curve. Points on the caustic curve are denoted as P_c(ρ_c, z_c).

$$z_c = j(\rho_c, \rho_m) = j(\rho_c, \rho_m + d\rho_m)$$
$$m_j(\rho_m)\,\rho_c + c_j(\rho_m) = m_j(\rho_m + d\rho_m)\,\rho_c + c_j(\rho_m + d\rho_m)$$
$$\rho_c = \frac{c_j(\rho_m + d\rho_m) - c_j(\rho_m)}{m_j(\rho_m) - m_j(\rho_m + d\rho_m)} \tag{22}$$

Substituting Equations (20) and (21) into Equation (22) and taking the limit dρ_m → 0 thus results in Equation (8). Accordingly, z_c is then derived from j(ρ_c, ρ_m) = m_j(ρ_m)ρ_c + c_j(ρ_m), yielding Equation (9).
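The Appendix algebra can be checked symbolically; the sketch below uses sympy to form m_j and c_j from Equations (20) and (21), takes the limit of Equation (22) as the derivative ratio −c_j′/m_j′, and evaluates the result at a sample point, which should agree with Equation (8). This is a verification aid, not part of the original derivation.

```python
# Sketch: symbolic check of the caustic-curve derivation.
import sympy as sp

rho, R, h = sp.symbols('rho R h', positive=True)
s = sp.sqrt(R**2 - rho**2)

m_j = -(R**2 * s - h * R**2 + 2 * h * rho**2) / (2 * h * rho * s - rho * R**2)  # Equation (20)
c_j = s - m_j * rho                                                             # Equation (21)

# Limit of Equation (22) as d(rho_m) -> 0:  rho_c = -c_j'(rho_m) / m_j'(rho_m)
rho_c = -sp.diff(c_j, rho) / sp.diff(m_j, rho)
z_c = m_j * rho_c + c_j

# Spot check against Equation (8) at R = 1, h = 2, rho_m = 3/5 (expect 72/175).
print(rho_c.subs({R: 1, h: 2, rho: sp.Rational(3, 5)}))
```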

Endnote

^a Paraboloidal catadioptric cameras and hyperboloidal catadioptric cameras are also known, in short, as para-catadioptric and hyper-catadioptric cameras respectively, mainly due to their extensive utilisation.

Abbreviations

FOV: field of view
SVP: single viewpoint
NSVP: non-single viewpoint
CNC: computer numerical control
3D: 3 dimensional
2D: 2 dimensional

References

1. Rees DW: Patent 3505465. 1970.

2. Yamazawa K, Yagi Y, Yachida M: Omnidirectional imaging with hyperboloidal projection. In Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93). Tokyo, Japan; 1993:1029-1034.

3. Yagi Y, Kawato S: Panorama scene analysis with conic projection. 1990, 181-187.

4. Hong J, Tan X, Pinette B, Weiss R, Riseman EM: Image-based homing. IEEE Control Syst 1992, 12:38-45.

5. Baker S, Nayar SK: A theory of single-viewpoint catadioptric image formation. Int J Comput Vis 1999, 35(2):175-196. 10.1023/A:1008128724364

6. Baker S, Nayar SK: A theory of catadioptric image formation. In Proceedings of the Sixth International Conference on Computer Vision, 1998. Bombay, India; 1998:35-42.

7. Sturzl W, Srinivasan W: Omnidirectional imaging system with constant elevational gain and single viewpoint. In Proceedings of OMNIVIS - 10th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras. Zaragoza, Spain; 2010:1-7.

8. Geyer C, Daniilidis K: A unifying theory for central panoramic systems and practical applications. In Proceedings of the 6th European Conference on Computer Vision - Part II (ECCV '00). London, UK; 2000:445-461.

9. Barreto JP, Araujo H: Geometric properties of central catadioptric line images and their application in calibration. IEEE Trans Pattern Anal Mach Intell 2005, 27:1327-1333.

10. Ying X, Hu Z: Catadioptric camera calibration using geometric invariants. IEEE Trans Pattern Anal Mach Intell 2004, 26(10):1260-1271. 10.1109/TPAMI.2004.79

11. Ying X, Zha H: Simultaneously calibrating catadioptric camera and detecting line features using Hough transform. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005). Alberta, Canada; 2005:412-417.

12. Vasseur P, Mouaddib EM: Central catadioptric line detection. In British Machine Vision Conference. Kingston University, London, UK; 2004.

13. Mei C, Rives P: Single view point omnidirectional camera calibration from planar grids. In IEEE International Conference on Robotics and Automation, 2007. Roma, Italy; 2007:3945-3950.

14. Puig L, Bastanlar Y, Sturm P, Guerrero JJ, Barreto JA: Calibration of central catadioptric cameras using a DLT-like approach. Int J Comput Vis 2011, 93:101-114. 10.1007/s11263-010-0411-1

15. Deng XM, Wu FC, Wu YH: An easy calibration method for central catadioptric cameras. Acta Automatica Sinica 2007, 33(8):801-808.

16. Gasparini S, Sturm P, Barreto J: Plane-based calibration of central catadioptric cameras. In 2009 IEEE 12th International Conference on Computer Vision; 2009:1195-1202.

17. Wu F, Duan F, Hu Z, Wu Y: A new linear algorithm for calibrating central catadioptric cameras. Pattern Recogn 2008, 41(10):3166-3172. 10.1016/j.patcog.2008.03.010

18. Wu Y, Hu Z: Geometric invariants and applications under catadioptric camera model. In Tenth IEEE International Conference on Computer Vision (ICCV 2005). Beijing, China; 2005:1547-1554.

19. Scaramuzza D, Martinelli A, Siegwart R: A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. Beijing, China; 2006:5695-5701.

20. Scaramuzza D, Martinelli A, Siegwart R: A flexible technique for accurate omnidirectional camera calibration and structure from motion. In IEEE International Conference on Computer Vision Systems (ICVS '06). New York, USA; 2006.

21. Lei J, Du X, Zhu YF, Liu JL: Unwrapping and stereo rectification for omnidirectional images. J Zhejiang Univ Sci A 2009, 10(8):1125-1139. 10.1631/jzus.A0820357

22. Micusik B, Pajdla T: Autocalibration & 3D reconstruction with non-central catadioptric cameras. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004). Washington, DC, USA; 2004:58-65.

23. Micusik B, Pajdla T: Structure from motion with wide circular field of view cameras. IEEE Trans Pattern Anal Mach Intell 2006, 28(7):1135-1149.

24. Agrawal A, Taguchi Y, Ramalingam S: Analytical forward projection for axial non-central dioptric and catadioptric cameras. In Proceedings of the 11th European Conference on Computer Vision: Part III (ECCV '10). Springer-Verlag, Berlin, Heidelberg; 2010:129-143.

25. Derrien S, Konolige K: Approximating a single viewpoint in panoramic imaging devices. In Proceedings of the IEEE Workshop on Omnidirectional Vision, 2000. South Carolina, USA; 2000:85-90.

26. Shabayek A, Morel O, Fofi D: Auto-calibration and 3D reconstruction with non-central catadioptric sensors using polarization imaging. In Proceedings of the 10th Workshop on Omnidirectional Vision (OMNIVIS), in conjunction with Robotics: Science and Systems (RSS). Zaragoza, Spain; 2010.

27. Gaspar J, Santos-Victor J: Visual path following with a catadioptric panoramic camera. In Proceedings of the International Symposium on Intelligent Robotic Systems (SIRS '99). Coimbra, Portugal; 1999.

28. Winters N, Gaspar J, Lacey G, Santos-Victor J: Omni-directional vision for robot navigation. In Proceedings of the IEEE Workshop on Omnidirectional Vision. South Carolina, USA; 2000:21-28.

29. Jeng SW, Tsai WH: Using pano-mapping tables for unwarping of omni-images into panoramic and perspective-view images. IET Image Process 2007, 1(2):149-155. 10.1049/iet-ipr:20060201

30. Geyer C, Daniilidis K: Catadioptric camera calibration. Kerkyra, Greece; 1999.

31. Geyer C, Daniilidis K: Paracatadioptric camera calibration. IEEE Trans Pattern Anal Mach Intell 2002, 24(5):687-695. 10.1109/34.1000241

32. Zhang Z: A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000, 22(11):1330-1334. 10.1109/34.888718

33. Duda RO, Hart PE: Use of the Hough transformation to detect lines and curves in pictures. Commun ACM 1972, 15:11-15. 10.1145/361237.361242

34. Swaminathan R, Grossberg MD, Nayar SK: Non-single viewpoint catadioptric cameras: geometry and analysis. Int J Comput Vis 2001, 66(3):211-229.

35. Hicks RA, Bajcsy R: Reflective surfaces as computational sensors. Image Vis Comput 2001, 19(11):773-777. 10.1016/S0262-8856(00)00104-9

36. Harris C, Stephens M: A combined corner and edge detector. In Proceedings of the Fourth Alvey Vision Conference. Manchester, UK; 1988:147-151.

37. Hartley RI, Zisserman A: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge; 2004.


Acknowledgements

This research project is funded by the Ministry of Higher Education (MOHE), Malaysia, under a Fundamental Research Grant Scheme (FRGS/2/2010/TK/SWIN/03/02). N. S. Chong also thanks the Swinburne University of Technology (Sarawak Campus) for his Ph.D. studentship.

Author information

Corresponding author

Correspondence to Mou Ling Dennis Wong.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

