Study on Use of Omnidirectional Camera to Determine the Shape of Earthquake-Damaged Structures

Keiya SATOH, Takaaki IKEDA, Masataka SHIGA

In recent years, three-dimensional (3D) city models have been widely created and are increasingly used in research and practical applications for disaster prevention and mitigation. One method for generating the structure models that make up a 3D city model is to capture images from multiple angles and apply Structure from Motion (SfM) and Multi-View Stereo (MVS) analysis to generate point cloud and mesh data. Immediately after a disaster, however, the time and scope of on-site investigation are severely limited. Under such constraints, further improving model quality and reducing processing time require a prior evaluation of the factors that influence them. In this study, to make photography more efficient, we propose using an omnidirectional camera instead of a conventional frame camera. In principle, an omnidirectional camera is more efficient than a frame camera: it has no blind spots and can significantly reduce the time required for photographing. However, images captured with an omnidirectional camera encompass a full 360° view in a single image, resulting in significant distortion at the periphery. The impact of this distortion on the accuracy of point clouds generated through SfM-MVS analysis has not been fully examined. In addition, when accuracy is verified in a real environment, the results of another measurement method must be treated as the reference values, so comparison with true values is practically impossible. Given these challenges, this study reconstructs a virtual space containing a low-rise residential area, in which the true values are known. Using images captured with an omnidirectional camera, we generate 3D point clouds of structures through SfM-MVS analysis.
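The 360° coverage and peripheral distortion mentioned above both follow from the projection such cameras typically use. As an illustrative sketch (the paper does not specify the camera model, so the standard equirectangular mapping is assumed here), each pixel of a 360° image corresponds to exactly one viewing direction on the sphere; distortion arises because equal pixel steps near the top and bottom rows cover ever-smaller solid angles:

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing ray.
    Longitude spans [-pi, pi) across the width, latitude [pi/2, -pi/2]
    down the height, so a single image covers every direction once."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([
        np.cos(lat) * np.sin(lon),  # x (right)
        np.sin(lat),                # y (up)
        np.cos(lat) * np.cos(lon),  # z (forward)
    ])

# The centre pixel of a 3840x1920 image looks straight ahead along +z.
ray = equirect_pixel_to_ray(1920, 960, 3840, 1920)
```

Because the horizontal angular step per pixel is constant while the circle of latitude shrinks toward the poles, scene content near the top and bottom of the image is strongly stretched, which is the distortion whose effect on SfM-MVS accuracy this study examines.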
By comparing the 3D coordinates of the generated point cloud with the predefined true values in the virtual space, we evaluate the factors that affect point cloud accuracy. The target area assumes a region where low-rise houses have subsided and tilted because of earthquake-induced liquefaction. The parameters evaluated are camera height, the number of ground control points, the spacing between photographs, and image resolution. The analysis shows that no clear relationship between camera height and point cloud accuracy could be determined. Accuracy improved as the number of ground control points increased, but their effect was smaller than that of photograph spacing or image resolution. For photograph spacing, the smallest errors were observed at 0.5 m, followed by 2.0 m and then 1.0 m, rather than errors decreasing monotonically with shorter spacing. For image resolution, higher-resolution images yielded better point cloud accuracy, and image resolution had the greatest impact on accuracy of all the parameters examined.
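The comparison of reconstructed coordinates against known true values can be sketched as a nearest-neighbour distance computation. The paper does not state its exact error metric, so the following is an assumed minimal version: for each reconstructed point, find the closest ground-truth point and summarise the distances as an RMSE (brute force, fine for small clouds; a k-d tree would be used at scale):

```python
import numpy as np

def point_cloud_rmse(reconstructed, ground_truth):
    """Per-point distance from each reconstructed point to its nearest
    ground-truth point, plus the RMSE over all points (O(N*M) brute force)."""
    diff = reconstructed[:, None, :] - ground_truth[None, :, :]
    dists = np.linalg.norm(diff, axis=2).min(axis=1)
    return dists, float(np.sqrt(np.mean(dists ** 2)))

# Toy example: "true" points in a 10 m cube and a reconstruction
# perturbed by ~1 cm Gaussian noise (hypothetical data, not the study's).
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 10.0, size=(200, 3))
rec = gt + rng.normal(0.0, 0.01, size=gt.shape)
dists, rmse = point_cloud_rmse(rec, gt)
```

Repeating such a computation while varying one parameter at a time (camera height, ground control points, photograph spacing, resolution) is the kind of factor-by-factor evaluation the abstract describes.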