
A Field Primer for Monitoring Benthic Ecosystems Using Structure-From-Motion Photogrammetry

Published: April 15, 2021
doi: 10.3791/61815

Summary

We provide a detailed protocol for conducting underwater structure-from-motion photogrammetry surveys to generate 3D models and orthomosaics.

Abstract

Structure-from-motion (SfM) photogrammetry is a technique used to generate three-dimensional (3D) reconstructions from a sequence of two-dimensional (2D) images. SfM methods are becoming increasingly popular as a noninvasive way to monitor many systems, including anthropogenic and natural landscapes, geologic structures, and both terrestrial and aquatic ecosystems. Here, a detailed protocol is provided for collecting SfM imagery to generate 3D models of benthic habitats. Additionally, the cost, time efficiency, and output quality of employing a Digital Single Lens Reflex (DSLR) camera versus a less expensive action camera have been compared. A tradeoff between computational time and resolution was observed, with the DSLR camera producing models with more than twice the resolution of the action camera models but requiring approximately 1.4 times longer to process. This primer aims to provide a thorough description of the steps necessary to collect SfM data in benthic habitats for those who are unfamiliar with the technique as well as for those already using similar methods.

Introduction

Ecosystem processes are naturally dynamic and can be difficult to quantify. The past decade has seen a surge in new technologies for capturing ecosystems and their dynamics at a range of scales, from 3D laser scanning of individual ecosystem features to satellite remote sensing of large areas1,2,3. In benthic habitats, structure is intimately connected with ecosystem function8, making tools that simultaneously allow for monitoring geometry and community structure especially valuable for understanding ecological dynamics. However, many modern approaches cannot be used in aquatic systems due to the physical properties of water (e.g., refraction, distortion, turbidity). Techniques such as LiDAR (Light Detection and Ranging) and some aerial survey methods may be appropriate at large spatial scales but cannot acquire the resolution needed to assess fine-scale changes in benthic habitats. Structure-from-Motion (SfM) photogrammetry methods have recently been adapted to produce large-scale, high-resolution orthomosaics and 3D surface models of underwater habitats4,5,6,7.

SfM photogrammetry is a relatively low-cost, simple, non-invasive, and repeatable method that allows for the generation of large-scale, high-resolution records of the benthic environment in aquatic ecosystems9. SfM uses a sequence of 2D images to generate 3D model reconstructions. The models generated from SfM can be used to collect data on the structural complexity (e.g., rugosity, dimensionality)4,5,10,11,12 and community structure (e.g., species composition, population demography)13,14,15 of benthic ecosystems. Furthermore, as this method is relatively inexpensive, quick, and repeatable, it can be used by both scientists and non-scientists to gather valuable, objective information on these ecosystems. Therefore, this method is a viable technique for use in citizen science projects where standardization of sampling effort, minimization of bias, engagement of participants, and ease of training are vital to the quality of data and overall success16,17.

This article provides a detailed protocol for conducting underwater SfM surveys. Simultaneously, the use of a DSLR camera has been compared with that of a more cost-effective 'action camera', and the relative advantages and disadvantages of each are outlined. The overall objective is to familiarize scientists and non-scientists with benthic SfM survey methods as rapidly as possible by providing a simple, commonly used protocol, thereby promoting wider use of this method. For examples of studies that have applied variations of this method to study underwater ecological communities, see Burns et al. (2015)4, Storlazzi et al. (2016)18, Ventura et al. (2016, 2018)19,20, Edwards et al. (2017)14, George et al. (2018)21, Anelli et al. (2019)22, and Torres-Pulliza et al. (2020)10.

The method described here requires a two-person snorkel or SCUBA team. After the survey site is selected, a spool of line (Figure 1A) is placed at the center of the site, and calibration tiles (Figure 1B) are distributed ~2 m from the center. One person (the swimmer) swims with the camera and captures images of the site, while the second person (the assistant) tends the spool in the center of the plot (Figure 1C). First, the swimmer connects the camera to the spool via the line and then begins to take continuous pictures of the benthos while swimming face-down and forward to unwind the line off the spool. The swimmer should maintain a vertical distance of ~1 m above the substrate at all times, adjusting their position to match that of the topography as they swim. Importantly, the line connecting the spool and camera should remain taut at all times to create even spacing in the spiral as the swimmer surveys the plot. The assistant maintains the spool in a stable, upright position and ensures that the spool does not rotate, and that the line does not become tangled.

Once the line has been completely unwound, the swimmer stops, turns, and swims in the opposite direction to recoil the line around the spool. As the swimmer switches directions, the assistant turns the spool exactly 180° as the line is wound in, so that the return path does not exactly overlap the outgoing path. Once the swimmer is as close to the center as possible, the camera is detached from the line, and the assistant takes the spool and line and swims away from the central portion of the site. The swimmer then finishes imaging the center of the plot by moving the camera in a small spiral over the center. While there are several ways to image an area effectively, the spool-and-line method described here is robust even in non-ideal environmental conditions where choppy surface waters, swell, or low visibility might otherwise impede data collection. In these scenarios, this method keeps snorkelers/divers attached and ensures high overlap of images by keeping the swimmer on a controlled path.
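
As a rough planning aid (not part of the published protocol), the sketch below estimates plot area, total swim distance, survey time, and image count for the spool-and-line spiral described above. The line length and swim-line spacing come from the example in this article, whereas the swim speed is an assumed value and the frame rate matches the ~4 fps used for the DSLR here; treat the outputs as order-of-magnitude estimates only.

```python
# Back-of-envelope survey geometry for the spool-and-line spiral method.
# line_length and track_spacing are taken from the protocol example below;
# swim_speed is an assumed pace and frame_rate the approximate DSLR rate.
import math

line_length = 6.0     # m of line, i.e., plot radius (protocol example)
track_spacing = 1.3   # m between adjacent swim lines (~50 in, protocol example)
swim_speed = 0.2      # m/s, assumed slow survey pace (not specified in the text)
frame_rate = 4.0      # frames/s, approximate DSLR capture rate used here

plot_area = math.pi * line_length ** 2            # ~113 m^2, as stated in the text
n_loops = line_length / track_spacing             # revolutions per spiral
spiral_length = math.pi * n_loops * line_length   # Archimedean spiral approximation
total_path = 2 * spiral_length                    # outgoing + offset return spiral

survey_time_min = total_path / swim_speed / 60
n_images = total_path / swim_speed * frame_rate   # ~3,500, comparable to the ~3,125
                                                  # frames per camera used in this study
along_track_step_cm = swim_speed / frame_rate * 100

print(f"plot area  ~{plot_area:.0f} m^2")
print(f"swim path  ~{total_path:.0f} m, ~{survey_time_min:.0f} min at {swim_speed} m/s")
print(f"images     ~{n_images:.0f}, spaced ~{along_track_step_cm:.0f} cm along track")
```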

Protocol

1. Materials

  1. Camera
    1. Ensure that the camera is durable and waterproof (or housed in a waterproof housing) and can capture a minimum of 2 frames/s (fps).
      NOTE: A minimum frame rate of ~4 fps was used in this example.
    2. Digital Single Lens Reflex (DSLR) camera
      1. Set the camera to shoot continuously at a photo capture rate between 2 fps and 5 fps.
      2. To reproduce the protocol described for this example, use a camera in an underwater housing (see Table of Materials) with the following settings: Manual Mode (M); f10, 18 mm; shutter speed = 1/320; exposure compensation = -1/3; image quality = highest, no RAW; drive mode = continuous; autofocus = AI SERVO; ISO = Auto, max3200; file numbering = Auto reset; image auto rotate = Off; time/date = UTC.
    3. Action camera
      1. Set to video mode or continuous shooting mode at the highest resolution and frame rate possible.
        NOTE: The action camera can also be used in continuous mode as long as the frame rate is 2 images per second or greater.
      2. To reproduce the protocol in this example (see Table of Materials), use a waterproof action camera with the following settings: Video resolution = 4K (4:3 aspect ratio); frame rate = 30 fps.
        NOTE: For action cameras, it may be easier to attach the line from the spool to the swimmer rather than to the camera. In this example, the line was attached to the swimmer's wrist via a small lanyard.
  2. Spool rig (Figure 1A)
    1. Ensure that the spool is of the appropriate size to hold the length of line needed for the survey site radius.
      NOTE: The circumference of the spool controls the spacing of the spiral swim lines, and the length of the line determines the sample area. In this example, an ~8 inch (~20 cm) diameter spool was used for ~50 inch (~1.3 m) spacing of swim lines. See reference 9 for details.
    2. Select a spool rig with a flanged edge (for smoothly guiding the line on and off the spool) and attachment points for a handle and pole (to control height from substrate). Ensure that the spool rig is inherently negatively buoyant or made so with the addition of weights.
      NOTE: In this example, polyvinyl chloride (PVC) pipes for the handle and pole were used, and the spool was 3D printed in polylactic acid plastic. However, the spool can be as simple as a large PVC pipe or any other round object with the desired diameter.
      1. For frequent use and/or challenging field conditions, select a spool made of a more durable material such as aluminum.
      2. Make sure that the spool does not rotate on the pole or spin when in use.
    3. Fix the line to the spool at one end and to a detachable clip at the other for connecting to the camera.
      NOTE: The length of the line defines the radius of the site. Here, 6 m of line was used for sites of 12 m in diameter.
  3. Calibration tiles
    1. Although specialized calibration tiles are not necessary, ensure that negatively buoyant, recognizable objects of known size are included in the model for scale. Consider surge and current conditions to ensure that suitable materials are used, so that tiles remain stationary during photo collection.
      NOTE: Here, scale marker templates available as part of certain software programs were printed on waterproof paper, which was attached to 1-inch-thick PVC tiles.
    2. Ensure that divers have a means to measure the depth of each tile; in this example, an electronic depth gauge was used (see Table of Materials).
  4. Color correction
    1. Set the white balance on the camera to custom. Take a photo of an 18% grey card or white dive slate underwater before the start of every SfM survey, i.e., each time a new site is started.
      NOTE: The photo will allow for color correction and will also help to separate the downloaded images from different sites when conducting multiple surveys on the same day.
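
The custom white balance and grey-card photo described above support color correction in photo-editing software later (see Table of Materials). Purely as an illustration of the underlying idea, and not as part of the published protocol, the sketch below scales each color channel so that the grey-card region averages to a neutral grey; the file names and card coordinates are hypothetical placeholders.

```python
# Minimal grey-card color-correction sketch (illustrative only; the authors
# used photo-editing software for this step). Requires NumPy and Pillow.
import numpy as np
from PIL import Image

def grey_card_gains(card_path, box):
    """Per-channel gains that neutralize the color cast measured on the grey card.
    box = (left, upper, right, lower) pixel coordinates of the card in the photo."""
    card = np.asarray(Image.open(card_path).crop(box), dtype=np.float64)
    channel_means = card.reshape(-1, 3).mean(axis=0)   # mean R, G, B over the card
    return channel_means.mean() / channel_means        # equalize the three channels

def apply_gains(image_path, gains, out_path):
    """Apply the white-balance gains to one survey image and save the result."""
    img = np.asarray(Image.open(image_path), dtype=np.float64)
    balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
    Image.fromarray(balanced).save(out_path)

# Hypothetical usage: correct one image from a site using that site's card photo.
gains = grey_card_gains("site01_greycard.jpg", box=(1000, 800, 1400, 1200))
apply_gains("site01_0001.jpg", gains, "site01_0001_corrected.jpg")
```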

2. Detailed methods

  1. Site selection
    1. Select a site that has enough room to swim the entirety of the spiral pattern (~113 m2 in this example). In addition to the area being surveyed, incorporate a small buffer area to ensure that the entire survey area is sufficiently photographed to yield high-quality data.
    2. Consider the ability and equipment of the two-person team. Shallow sites (< ~2 m) can be surveyed on snorkel, whereas deeper sites may require SCUBA.
  2. If planning to survey the site repeatedly, mark the center point, where the spool rig will be placed, with a tag or a permanent structure (e.g., rebar or cinder block). At the very least, record a global positioning system (GPS) coordinate so that the site can be relocated with the assistance of a printout of the orthomosaic.
    NOTE: Permanent underwater structures typically require a permit.
  3. Prepare the site.
    1. Set the spool in the middle of the site.
    2. Set out calibration tiles and record their depths. Place calibration tiles face-up, ~2 m away from the center.
      NOTE: In this example, 3 calibration tiles were placed in a triangle around the center of the site. Calibration tiles should be appropriately weighted and positioned to ensure minimal movement during the collection of the photos.
  4. Instruct the swimmer to swim with the camera while the assistant tends the spool.
    1. The assistant sets the pole and the attached spool upright in the center of the selected site and holds the spool rig upright and stationary.
    2. Ensure that the swimmer attaches the side of the camera closest to the spool to the line and holds the camera facing straight down ~1 m from the benthos.
      NOTE: If the swimmer must tilt the camera, try to make sure that it is tilted slightly forward rather than backwards to avoid collecting images in the swimmer's shadow. Tilting the camera slightly forward for both the outward spiral and the return spiral may also capture better angles of the benthos and produce better models, especially when there are overhangs and holes.
    3. Once the camera is properly positioned, the swimmer begins taking continuous images of the benthos while swimming forward and maintaining tension on the line.
    4. Ensure that the swimmer continues to swim in a spiral at a consistent speed while taking photographs until the line is completely unwound from the spool.
      NOTE: The swimmer should try to stay a constant distance of ~ 1 m above the benthos and swim the spiral at a moderate pace to ensure sufficient overlap between images. When in doubt, slower is better.
    5. In highly rugose environments (e.g., coral reefs), include a third worker (second assistant) who can prevent line entanglement by hovering above the center of the line and gently lifting it over obstacles.
    6. When the line is completely unspooled, the swimmer reverses directions, reattaching the camera if necessary, and swims the camera in the opposite direction to begin re-winding the line back onto the spool while taking pictures.
      NOTE: Swimming the reverse spiral is not absolutely necessary, but will typically produce better models.
    7. If using a single-spiral method to save time, the swimmer detaches the line from the camera and skips ahead to imaging the center of the site (step 2.4.11) while the assistant winds in the line and removes the spool rig from the site.
    8. As soon as the swimmer begins to swim in the opposite direction, the assistant rotates the spool ½ of a turn (180°) against the new swimming direction as the line is wound in. This ½ turn ensures that the swimmer’s return path is offset from the original path to yield greater photo coverage of the site.
    9. Ensure that the swimmer continues to take pictures and swim the reverse spiral until the line is almost completely rewound around the spool.
    10. When the swimmer’s and assistant’s spacing prevents further progress, the swimmer will then stop taking pictures to detach the camera from the line and allow the assistant to remove the spool rig from the center of the site.
    11. Once the spool is removed from the site, the swimmer images the center of the site by holding the camera facing straight down and moving the camera in a small spiral pattern over the center of the site.

3. Clean up the site.

  1. Pick up calibration tiles and any other equipment before departing the site.
    NOTE: Never leave trash or equipment at a site. Always leave a site cleaner than you found it.

Representative Results

In this example, Reef Site 2_7 located on Patch Reef 13 in Kāneʻohe Bay, Oʻahu, Hawaiʻi, was imaged, and 3,125 JPEG photos from the DSLR and 3,125 JPEG frame captures from the action camera video (Table 1) were used as input to create the orthomosaics and 3D models. The general workflow consisted of 5 stages: 1) alignment of photos to generate the sparse point cloud, 2) scaling the sparse point cloud and optimizing cameras, 3) building the dense point cloud (depth maps were also generated during this stage), 4) building the digital elevation model (DEM) and orthomosaic, and 5) generating the 3D model and texture. Note that stages 4 and 5 do not necessarily need to be done in that order, but they must be performed after processing the dense point cloud and depth maps. Georeferencing the models should occur prior to generating the orthomosaic and DEM. The settings used for these stages and the processing details are outlined in Table 2 and the Table of Materials, respectively.
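
The models in this example were processed in the photogrammetry software listed in the Table of Materials, using the settings in Table 2 (see also the Supplementary Material). For readers who prefer to script the same stages, a minimal sketch using the Agisoft Metashape Professional Python API (v1.6-era calls) is shown below; argument names can differ between Metashape versions, the file paths are placeholders, and scaling against the calibration tiles is indicated only as a comment because it is typically done interactively.

```python
# Sketch of the five processing stages using the Agisoft Metashape (v1.6-era)
# Python API. Paths are placeholders; check argument names against the API
# reference for your Metashape version before running.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("site01_photos/*.jpg")))

# Stage 1: align photos to build the sparse point cloud
# (High accuracy = downscale 1; key/tie point limits as in Table 2).
chunk.matchPhotos(downscale=1, generic_preselection=True,
                  keypoint_limit=40000, tiepoint_limit=4000)
chunk.alignCameras()

# Stage 2: scale the sparse cloud and optimize cameras. Scale bars between the
# calibration-tile markers are usually added interactively (or with
# chunk.detectMarkers() and chunk.addScalebar()) before this call.
chunk.optimizeCameras()

# Stage 3: depth maps and dense point cloud (Medium quality = downscale 4).
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

# Stage 4: digital elevation model and orthomosaic.
chunk.buildDem(source_data=Metashape.DenseCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)

# Stage 5: mesh from depth maps, then texture (settings as in Table 2).
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 face_count=Metashape.LowFaceCount,
                 interpolation=Metashape.EnabledInterpolation,
                 vertex_colors=True)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)

doc.save("site01_project.psx")
```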

For more detailed methods on how to generate 3D models and orthomosaics, see the Supplementary Material and Suka et al.23. Processing time was shorter for the action camera-derived model for every step, including sparse point cloud generation, dense point cloud generation, mesh model rendering, and textured model rendering. This led to a substantially faster overall processing time for the action camera model (6 h 39 min) than for the DSLR model (9 h 14 min). The exact time for model processing will vary with computational power and specific hardware configurations.

The model generated using images from the DSLR camera contained 2,848,358 sparse cloud points and 787,450,347 dense cloud points, while the model generated from the action camera images contained only 2,630,543 sparse cloud points and 225,835,648 dense cloud points. This led to the DSLR model having more than twice the resolution of the action camera model, with orthomosaic resolutions of 0.208 and 0.442 mm/pixel for the DSLR- and action camera-derived models, respectively (Table 1). Despite the better resolution of the DSLR model relative to the action camera model, both methods were able to produce high-quality models with little difference in visual representation when the ~113 m2 reef area was represented as a 20 cm2 digital elevation model (Figure 2 top panels) or 2D orthomosaic projection (Figure 2 middle panels).
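
As a rough cross-check on these orthomosaic resolutions, the nominal ground sampling distance (GSD) of a single nadir photo can be estimated from the sensor pixel pitch, focal length, and camera height. The sketch below uses the ~1 m survey altitude and 18 mm focal length from the protocol together with nominal sensor specifications for the DSLR used here (an assumption, and refraction and lens distortion are ignored), so it is only a back-of-envelope estimate.

```python
# Rough ground-sampling-distance (GSD) estimate for the DSLR at survey altitude.
# Sensor width and pixel count are nominal Canon EOS Rebel SL3 specifications
# (assumed); focal length and altitude come from the protocol. Refraction and
# lens distortion are ignored, so this is only a back-of-envelope check.
sensor_width_mm = 22.3
image_width_px = 6000
focal_length_mm = 18.0
altitude_mm = 1000.0   # ~1 m above the benthos

pixel_pitch_mm = sensor_width_mm / image_width_px         # ~0.0037 mm per pixel
gsd_mm = pixel_pitch_mm * altitude_mm / focal_length_mm   # ~0.21 mm per pixel

print(f"nominal GSD ~{gsd_mm:.2f} mm/pixel")  # close to the 0.208 mm/pixel in Table 1
```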

Figure 1
Figure 1: Structure-from-Motion photogrammetry. (A) Example of a spool rig for controlling swimmer distance with an attached handle and pole for precise positioning and handling. (B) Calibration tiles. (C) A schematic of the swim path with relative positions of the swimmer (green) and the assistant (orange).

Figure 2
Figure 2: Visual comparison of digital elevation models and orthomosaics. Digital elevation models (top) and orthomosaics (middle) constructed from DSLR (left) and action camera (right) images. The bottom panel is a zoom of the areas in the white boxes in the orthomosaics. The heatmap scales in the top panel represent distance from the surface of the water in meters (m).

                                             Canon EOS Rebel SL3   GoPro Hero 7
Cost
  Camera                                     ~$600.00              ~$220.00
  Underwater housing                         ~$1,700.00            NA
  Total cost                                 ~$2,300.00            ~$220.00
Photos
  Photo file format                          JPEG                  JPEG
  Photo resolution                           24 megapixels         12 megapixels (from 4K video)
  Aligned photos / total photos              3,125 / 3,125         3,125 / 3,125
Photogrammetry metrics
  Sparse cloud points                        2,848,358             2,630,543
  Dense cloud points                         787,450,347           225,835,648
  Faces (3D model)                           11,919,451            3,834,651
  Digital elevation model (DEM) resolution   0.831 mm/pixel        1.77 mm/pixel
  Orthomosaic resolution                     0.208 mm/pixel        0.442 mm/pixel
Processing times
  Sparse cloud generation                    1 h 23 min            1 h 27 min
  Dense cloud generation                     4 h                   3 h 11 min
  Mesh model rendering                       3 h 32 min            1 h 49 min
  Texture rendering                          19 min                12 min
  Total computer processing time             9 h 14 min            6 h 39 min

Table 1: Detailed information about setup cost, photos used to construct the models, photogrammetry metrics, and processing time. Processing was done using the same settings for both models. Note that processing time does not include time for various steps such as photo editing, extracting images from video, re-aligning photos, and editing and scaling the models.

                                  Canon EOS Rebel SL3     GoPro Hero 7
Images
  Average file size               ~8.3 MB                 ~4.7 MB
  Photo acquisition               Continuous mode         Extracted from 4K video
  Color correction                Manual                  Manual
  Lens correction                 No                      Yes
Photogrammetry process settings (identical for both cameras)
  Sparse cloud generation         Accuracy: High; key point limit: 40,000; tie point limit: 4,000; generic preselection: Yes
  Dense cloud generation          Medium quality
  3D mesh model generation        Source data: depth maps; quality: Medium; face count: Low; interpolation: enabled; calculate vertex colors: Yes
  3D texture generation           Texture type: diffuse map; source data: images; mapping mode: generic; blending mode: mosaic; texture size/count: 4096 / 1
  Digital elevation model (DEM)   From dense cloud
  Orthomosaic                     From DEM

Table 2: Detailed information on collected images and photogrammetric processing. Processing was done using the same settings for both models.

Supplementary Material.

Discussion

This study demonstrates that both the DSLR camera and the action camera produce models with better than 0.5 mm/pixel resolution in less than 10 h of processing time on a standard desktop computer. The major tradeoff between the DSLR and the action camera, aside from cost, is finer resolution versus faster processing time, respectively. However, the reported processing times only include the computational processing. Thus, although the computational time is less for the action camera, there is a substantial amount of time (10-20 min) invested in extracting images from the videos that is not required with the DSLR. An alternative is to use the action camera in continuous shooting mode to avoid image extraction. Continuous shooting mode was not used in this example because the action camera can only shoot at 2 fps, which requires a significantly slower swim rate to collect enough images to build a complete model. In this regard, there is a tradeoff between spending more time in the field (continuous shooting mode) and more time on the computer extracting images (video mode).
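
The frame-extraction step itself can be scripted; the text does not prescribe a particular tool, so the sketch below is just one hedged example that uses OpenCV to save roughly four frames per second from an action camera video. The file names, output pattern, and target rate are hypothetical placeholders.

```python
# Illustrative frame extraction from action camera video (the tool actually used
# for this study is not specified here). Saves ~target_fps frames per second.
import cv2

def extract_frames(video_path, out_prefix, target_fps=4.0):
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0     # the clips here were 30 fps
    step = max(int(round(video_fps / target_fps)), 1)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:                          # keep every step-th frame
            cv2.imwrite(f"{out_prefix}_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage:
print(extract_frames("GOPR0001.MP4", "site01_frame", target_fps=4.0), "frames saved")
```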

Advantages of the action camera include affordability and ease of transport and operation underwater. The main advantage of the DSLR is that it produces higher resolution images; hence, DSLR cameras are recommended over action cameras when cost is not prohibitive. The kinds of questions a study seeks to address will also be important in determining the method used. For instance, an action camera might be preferable in environments that are relatively homogeneous (e.g., seagrass beds, dead coral/rubble habitats), or where only broad community metrics (such as abundance or diversity) are being assessed over large spatial scales. However, a DSLR camera might be deployed in cases where tracking fine-scale changes in individual organisms or substrates is of interest.

As this is a field method, the model outputs will depend on various environmental factors such as lighting, water clarity, surface conditions, amount of surge, and movement of fish or non-stationary benthic structures (e.g., sea grass). Although there are no absolute thresholds of when it is appropriate to use this method, slightly overcast days with high water clarity, calm surface conditions, and little surge typically produce the best models. Moreover, there is a minimum depth below which these methods do not work well: in less than 0.5 m of water, there is low overlap between photos and fewer distinguishing features per photo. However, this does highlight another advantage of action cameras: they are smaller and thus easier to use at shallower depths. Furthermore, a smaller diameter spool and higher frame rate (or wider-angle lens) can improve image overlap in very shallow conditions9.

Many other data types can be integrated with this approach. For example, orthomosaics have been used to show the spatial density of molecular data (e.g., genes and metabolites) on corals24 and humans25 using the open source software 'ili'26. The same platform could also be used to map the spatial densities of animals, microorganisms, viruses, and/or chemicals in the environment. Other examples have used SfM for annotating benthic species spatially onto orthomosaics using geographic information system software10. Furthermore, the 3D models generated by SfM can be used to estimate habitat characteristics such as rugosity and fractal dimension. Indeed, the methods outlined here were recently used to derive a new geometric theory for habitat surfaces10. Finally, orthomosaics are being used as input surfaces for spatially explicit computational models, allowing for dynamical simulations to be overlaid on the model's 3D surface. Being able to easily generate large images and 3D representations of benthic habitats has allowed marine scientists to address hitherto unimagined questions3.
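
As one concrete example of such a derived habitat metric, surface rugosity is commonly computed as the ratio of the 3D surface area of a reconstructed patch to its planar area. The sketch below computes this ratio from a gridded digital elevation model exported from SfM; the surface in the usage example is a synthetic placeholder, not data from this study.

```python
# Surface rugosity (3D surface area / planar area) from a gridded DEM, one
# common structural-complexity metric derived from SfM outputs.
import numpy as np

def surface_rugosity(dem, cell):
    """dem: 2D array of elevations (m); cell: grid spacing (m)."""
    z = np.asarray(dem, dtype=float)
    z00, z10 = z[:-1, :-1], z[1:, :-1]   # corner elevations of each grid cell
    z01, z11 = z[:-1, 1:], z[1:, 1:]
    d = float(cell)
    # Split each cell into two triangles; areas follow from the cross product.
    a1 = 0.5 * d * np.sqrt((z10 - z00) ** 2 + (z01 - z00) ** 2 + d ** 2)
    a2 = 0.5 * d * np.sqrt((z10 - z11) ** 2 + (z01 - z11) ** 2 + d ** 2)
    surface_area = (a1 + a2).sum()
    planar_area = d ** 2 * (z.shape[0] - 1) * (z.shape[1] - 1)
    return surface_area / planar_area

# Hypothetical usage with a synthetic 20 cm x 20 cm rippled surface on a 1 mm grid:
x = np.arange(200) * 0.001
xx, yy = np.meshgrid(x, x)
toy_dem = 0.02 * np.sin(2 * np.pi * xx / 0.1) * np.sin(2 * np.pi * yy / 0.1)
print(f"rugosity ~ {surface_rugosity(toy_dem, cell=0.001):.2f}")   # 1.0 = perfectly flat
```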

Overall, this article provides a detailed protocol for conducting underwater SfM photogrammetry with either DSLR cameras or more cost-effective action cameras. These methods can be used by scientists for a broad range of purposes, from extracting data about benthic ecosystems to developing 3D input surfaces for in silico simulations. However, these protocols can also be used by non-scientists as part of citizen science efforts to gather valuable information on patterns of biodiversity, habitat complexity, community structure, and other ecological metrics.

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank the Paul G. Allen Family Foundation for funding this research and are grateful to Ruth Gates for the inspiration to use technology to help conserve reefs. We also thank NOAA and other collaborators for thoughtful discussion concerning these methods. Lastly, we thank Catie Foley and Patrick Nichols for providing the drone and underwater video of these methods.

We acknowledge the National Fish and Wildlife Foundation as a funding partner in this work.

Materials

Action camera: GoPro Hero7 Black (GoPro). Could be any waterproof action camera.
Adobe Lightroom (Adobe). Used for color correction.
Calibration tiles: flat PVC board cut to size for Agisoft targets; attach a dive weight underneath if expecting waves. Could be any negatively buoyant object of known size and color; the scale marker templates available from Agisoft Metashape software (v.1.6.0) are recommended.
DSLR camera: Canon EOS Rebel SL3 (Canon, 3453C002AA). Could be any DSLR camera in an underwater housing.
Line: plastic clothesline filament. Could be any negatively buoyant line strong enough to withstand field use.
Micro SDXC memory card (for the action camera).
Digital depth gauge: Oceanic Veo 2.0 (Oceanic).
SDXC memory card (for the DSLR). Any SDXC memory card should work, as long as there is enough space to hold all the pictures necessary to build the model.
Spool: 2 inch-long section of 8 inch diameter PVC pipe attached to a 3 foot section of 1 inch PVC pipe to form the stem. Could be any negatively buoyant, round object of the desired diameter.
Underwater camera housing for DSLR: Ikelite 200DLM/C Underwater TTL Housing (Ikelite, 6970.09). Should be the specific underwater housing for the DSLR make and model.
Processing computer: Windows 10 desktop with an Intel i9-9900K 8-core CPU, two Nvidia GeForce RTX 2070 SUPER GPUs, and 128 GB of RAM.

References

  1. Levy, J., Hunter, C., Lukacazyk, T., Franklin, E. C. Assessing the spatial distribution of coral bleaching using small unmanned aerial systems. Coral Reefs. 37 (2), 373-387 (2018).
  2. Muller-Karger, F. E., et al. Satellite sensor requirements for monitoring essential biodiversity variables of coastal ecosystems. Ecological Applications. 28 (3), 749-760 (2018).
  3. Dornelas, M., et al. Towards a macroscope: Leveraging technology to transform the breadth, scale and resolution of macroecological data. Global Ecology and Biogeography. 28, 1937-1948 (2019).
  4. Burns, J. H. R., Delparte, D., Gates, R. D., Takabayashi, M. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs. PeerJ. 2015 (3), 1077 (2015).
  5. House, J. E., et al. Moving to 3D: Relationships between coral planar area, surface area and volume. PeerJ. 2018 (6), 4280 (2018).
  6. Carlot, J., et al. Community composition predicts photogrammetry-based structural complexity on coral reefs. Coral Reefs. , 1-9 (2020).
  7. Young, G. C., Dey, S., Rogers, A. D., Exton, D. Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models. PloS ONE. 12 (4), 0175341 (2017).
  8. Wilson, S. K., Robinson, J. P. W., Chong-Seng, K., Robinson, J., Graham, N. A. J. Boom and bust of keystone structure on coral reefs. Coral Reefs. 38 (4), 625-635 (2019).
  9. Pizarro, O., Friedman, A., Bryson, M., Williams, S. B., Madin, J. A simple, fast, and repeatable survey method for underwater visual 3D benthic mapping and monitoring. Ecology and Evolution. 7 (6), 1770-1782 (2017).
  10. Torres-Pulliza, D., et al. A geometric basis for surface habitat complexity and biodiversity. Nature Ecology & Evolution. 4, 1495-1501 (2020).
  11. Bayley, D., Mogg, A., Koldewey, H., Purvis, A. Capturing complexity: field-testing the use of ‘structure from motion’ derived virtual models to replicate standard measures of reef physical structure. PeerJ. 2019 (7), 6540 (2019).
  12. Leon, J. X., Roelfsema, C. M., Saunders, M. I., Phinn, S. R. Measuring coral reef terrain roughness using ‘Structure-from-Motion’ close-range photogrammetry. Geomorphology. 242, 21-28 (2015).
  13. Burns, J. H. R., et al. Assessing the impact of acute disturbances on the structure and composition of a coral community using innovative 3D reconstruction techniques. Methods in Oceanography. 15-16, 49-59 (2016).
  14. Edwards, C. B., et al. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef. Coral Reefs. 36, 1291-1305 (2017).
  15. Piazza, P., et al. Underwater photogrammetry in Antarctica: long-term observations in benthic ecosystems and legacy data rescue. Polar Biology. 42 (6), 1061-1079 (2019).
  16. Bonney, R., et al. Citizen science: A developing tool for expanding science knowledge and scientific literacy. BioScience. 59 (11), 977-984 (2009).
  17. Dickinson, J. L., Zuckerberg, B., Bonter, D. N. Citizen science as an ecological research tool: Challenges and benefits. Annual Review of Ecology, Evolution, and Systematics. 41 (1), 149-172 (2010).
  18. Storlazzi, C. D., Dartnell, P., Hatcher, G., Gibbs, A. E. End of the chain? Rugosity and fine-scale bathymetry from existing underwater digital imagery using structure-from-motion (SfM) technology. Coral Reefs. 35 (3), 889-894 (2016).
  19. Ventura, D., Lasinio, G. J., Belluscio, A., Ardizzone, G. A low-cost drone based application for identifying and mapping of coastal fish nursery grounds. Estuarine, Coastal and Shelf Science. 171, 85-98 (2016).
  20. Ventura, D., Bonifazi, A., Gravina, M. F., Belluscio, A., Ardizzone, G. Mapping and classification of ecologically sensitive marine habitats using unmanned aerial vehicle (UAV) imagery and object-based image analysis (OBIA). Remote Sensing. 10 (9), 1331 (2018).
  21. George, E. E., et al. Relevance of coral geometry in the outcomes of the coral-algal benthic war. bioRxiv. , (2018).
  22. Anelli, M., et al. Towards new applications of underwater photogrammetry for investigating coral reef morphology and habitat complexity in the Myeik Archipelago, Myanmar. Geocarto International. 34 (5), 459-472 (2017).
  23. Suka, R., et al. Processing photomosaic imagery of coral reefs using structure-from-motion standard operating procedures. U.S. Dept. of Commerce, NOAA Technical Memorandum NOAA-TM-NMFS-PIFSC-93. , (2019).
  24. Galtier d’Auriac, I., et al. Before platelets: the production of platelet-activating factor during growth and stress in a basal marine organism. Proceedings of the Royal Society B: Biological Sciences. 285 (1884), 20181307 (2018).
  25. Bouslimani, A., et al. Molecular cartography of the human skin surface in 3D. Proceedings of the National Academy of Sciences of the United States of America. 112 (17), 2120-2129 (2015).
  26. Protsyuk, I., et al. 3D molecular cartography using LC-MS facilitated by Optimus and ‘ili software. Nature Protocols. 13 (1), 134-154 (2018).


Cite This Article
Roach, T. N. F., Yadav, S., Caruso, C., Dilworth, J., Foley, C. M., Hancock, J. R., Huckeba, J., Huffmyer, A. S., Hughes, K., Kahkejian, V. A., Madin, E. M., Matsuda, S. B., McWilliam, M., Miller, S., Santoro, E. P., Rocha de Souza, M., Torres-Pulliza, D., Drury, C., Madin, J. S. A Field Primer for Monitoring Benthic Ecosystems Using Structure-From-Motion Photogrammetry. J. Vis. Exp. (170), e61815, doi:10.3791/61815 (2021).
