Grasshopper ES by BetweenRealities

Photogrammetry

What is Reality Capture?

Reality Capture is a technology that uses digital imaging and processing techniques to create 3D models and digital representations of physical objects and environments. It is a rapidly growing field that is used in a wide range of industries and applications, including architecture, engineering, construction, film and entertainment, forensics, and more.

Reality Capture technology typically involves the use of specialized hardware and software tools to capture, process, and analyze data from digital images and other sources. This can include scanners and cameras that capture high-resolution images and measurements of an object or environment, as well as software tools that process and integrate the data to create 3D models and other digital representations.

Reality Capture technology has a number of advantages over traditional modeling and surveying methods. It allows for the creation of highly accurate and detailed models and representations, and can be used to capture objects and environments that are difficult or impossible to measure manually. It also allows for the creation of interactive and immersive experiences, and can be used to create virtual and augmented reality applications.

Overall, Reality Capture is a powerful and versatile technology that is enabling new ways of creating and using digital representations of physical objects and environments. It is transforming a wide range of industries and applications, and is helping to drive innovation and productivity in many areas.

Photos and setup

Reflex camera settings

Use 24-48 MP images for higher resolution. Use a 24-35 mm lens (full-frame equivalent) and avoid changing focal length during the shoot. Set the shutter speed to at least double the focal length (e.g. 1/70 s for a 35 mm lens, to avoid camera shake) and ISO 100-800. Use an aperture of f/8-f/12 to get enough depth of field to keep most of the frame in sharp focus. Turn off auto white balance and shoot on overcast days to avoid harsh direct light. If you can, shoot in RAW and convert to JPEG later.
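The shutter-speed rule of thumb above (denominator at least double the focal length) can be expressed as a tiny helper. This is only an illustration of the rule; the function name is hypothetical and nothing here is part of RealityCapture.

```python
def min_shutter_speed(focal_length_mm: float) -> float:
    """Return the slowest safe handheld shutter speed (in seconds),
    using the 'double the focal length' rule of thumb:
    shutter denominator >= 2 x focal length (full-frame equivalent)."""
    return 1.0 / (2.0 * focal_length_mm)

# A 35 mm lens should be shot at 1/70 s or faster:
print(min_shutter_speed(35))  # 0.014285714...  (= 1/70 s)
```

Faster speeds (smaller values) are always safe; the rule only sets the slowest acceptable speed.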

Improve taken images

Taking images: overlap the images, following concentric circles and radial axes. From the same position, shoot in several directions (for handheld camera photos, not drones), or use several cameras to speed up the process. Processing images: aim for even lighting, remove lens distortion, and flatten the images by killing shadows and highlights. Make a macro in Lightroom (Photoshop also works: Highlights = -100, Shadows = +100), Sync Settings, and export as JPEG.

Merging exteriors and interiors: increase the overlap when moving from exterior to interior, and photograph both sides of the transition space. Also add control points on distinctive features, or use real markers.

Initial settings and CLI

Naming conventions: AGR (aerial grid), ARN (aerial ring), AOR (aerial orthos), PEX (photogrammetry exterior), PIN (photogrammetry interior). Sample file name: 'SiteName_AGR_01_001.jpg'
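A simple validator for this naming convention can catch mis-named files before import. The type prefixes come from the text above; the underscore separators and the 2/3-digit zero padding are assumptions inferred from the sample name.

```python
import re

# Hypothetical validator for 'SiteName_AGR_01_001.jpg'-style names.
# Prefixes are from the convention list; padding widths are assumed.
NAME_RE = re.compile(
    r"^(?P<site>[A-Za-z0-9]+)_"
    r"(?P<type>AGR|ARN|AOR|PEX|PIN)_"
    r"(?P<batch>\d{2})_"
    r"(?P<shot>\d{3})\.jpe?g$",
    re.IGNORECASE,
)

def is_valid_capture_name(filename: str) -> bool:
    """Return True if the file name follows the capture convention."""
    return NAME_RE.match(filename) is not None

print(is_valid_capture_name("SiteName_AGR_01_001.jpg"))  # True
print(is_valid_capture_name("SiteName_XXX_1_1.jpg"))     # False
```

Running this over a capture folder before importing into RealityCapture makes the later "filter by batch" step much cleaner.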

Select a layout (3 tabs). Laser scan and photogrammetry process description: transfer the texture from a model made only with photogrammetry to a model made only with laser scans.

Import global settings: Workflow -> Application -> Settings (Export global settings). You can also change individual settings sections and save them separately (alignment, reconstruction settings...). Modify global settings: Workflow -> Add imagery -> Settings (ImageOverlap = Low, DistortionModel = 'Brown3 with Tangential2' to avoid the 'banana effect' with drone images). Add input photos and folders, or drop them onto the canvas.

  • Group by lens: the drone and camera photos use the same lens (28 mm in this case), but it is possible to use different lenses. RealityCapture groups those images automatically via Workflow -> Application -> Settings (GroupCalibrationByExif).

Filter by batch: Alignment -> Export: ImageList, to select grouped images later (export an ImageList for each set of images from a different camera).

Settings with various devices: scan input (meshing = True, texturing = False), terrestrial input (meshing = False, texturing = True), aerial input (meshing = True, texturing = True)

Point cloud

Import laser scan

Import point cloud from laser scan:

First model: Point cloud

Align data: Alignment -> Registration: AlignImages produces a sparse point cloud (components). If you modify a distance based on control points, you have to re-align the images.

  • Automate the process: CLI script 'Aligment_advance.bat':
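The original batch script is not reproduced here. As a rough illustration of what such a script drives, the sketch below builds a RealityCapture command line that adds an image folder, aligns it, and saves the project. The executable path is an assumption, and the flag names (`-addFolder`, `-align`, `-save`, `-quit`) should be checked against the RealityCapture CLI reference for your version before use.

```python
def build_align_command(exe: str, image_folder: str, project: str) -> list[str]:
    """Assemble a RealityCapture headless-alignment command line.
    Flag names are assumptions to verify against the official CLI docs."""
    return [
        exe,
        "-addFolder", image_folder,  # load every image in the folder
        "-align",                    # run alignment (creates components)
        "-save", project,            # save the project file
        "-quit",                     # close RealityCapture when done
    ]

cmd = build_align_command(
    r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe",
    r"D:\capture\PEX",
    r"D:\capture\site.rcproj",
)
print(" ".join(cmd))
```

On Windows the same sequence could live directly in a `.bat` file; building the argument list in Python just makes it easier to vary folders per batch.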

  • Reasons for an unaligned result: different camera types, different focal lengths, not enough overlap between images, or not enough overlap between two components. Export parts: Alignment -> Export -> Registration, to export the whole setup of a specific part of the file (.rcalign file extension).

Join parts: Alignment -> Registration: for a big project with several devices capturing data, save each part in .rcalign format and import them all into one project. Then MergeComponent (FeatureSource = 'Use component features', which is the best option in this case).

Unused images: View -> Display: Unregistered, for images that were not aligned (blur errors). Drone path: Alignment -> Analyzer: Inspect, to view the camera paths; check connectivity and black holes. Align with control points: Tools -> Registration: ControlPoints, for a better alignment of the images (look for characteristic spots). DefineDistance from the control points.

  • You can also use the drone georeference data, or load the FlightLog file.

  • If the misalignment persists, raise the Weight of the ControlPoints (from 50 to 100).

If there is a laser scan with GroundControlPoints (ImportGroundControl txt file, index-long-lat-z), disable GPS on the drone images (it is less precise). Select the cameras (PriorPose = 'Unknown'). Assign the control points and align the drone images against them. Finally, SuggestMeasurement from the input data to align the photos and control points precisely.
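A ground-control file in the index-long-lat-z layout mentioned above is easy to pre-check before importing. The whitespace separator and field order below are assumptions; adjust them to match your survey export, and the sample coordinates are made up.

```python
def parse_ground_control(text: str) -> dict[str, tuple[float, float, float]]:
    """Parse an index-long-lat-z ground-control text file into a dict
    mapping point index -> (longitude, latitude, elevation)."""
    points: dict[str, tuple[float, float, float]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        index, lon, lat, z = line.split()
        points[index] = (float(lon), float(lat), float(z))
    return points

sample = """\
gcp01 -3.7038 40.4168 667.2
gcp02 -3.7040 40.4170 668.0
"""
print(parse_ground_control(sample)["gcp01"])  # (-3.7038, 40.4168, 667.2)
```

Checking for duplicate indices or wildly inconsistent elevations here saves a confusing misalignment hunt inside RealityCapture later.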

Update model

Update the model: Alignment -> Registration: Update, to apply new changes to the model; this creates another component. Set the ground plane: Tools -> Scene alignment: SetGroundPlane, with the help of TopView and LeftView (View -> View camera). SetReconstructionRegion to delimit the object of interest. Filter by clipping box: View -> View tools: ClippingBox (from the ReconstructionRegion) to visualize the full surface (limit: 4M polygons)

  • Filter selection: Tools -> Mesh model: Advanced, then SelectLargestComponentSelected and Invert. Ctrl+Click to add to the selection. Then FilterSelection to hide it from the model.

Normal model: Mesh Model -> Create model: NormalDetail, to construct the mesh model.

  • If you only want a normal-detail model of a specific clipping part, select all the cameras not present in that zone (select the cameras, invert the selection, and set EnableInComponent = Disable)

  • Get several models in Mesh Model -> Create model -> Settings (ForceSinglePartMode = No)

Reduce the triangle count: Reconstruction -> Selection -> Advanced (SelectMarginalTriangles; SelectLargestComponentConnected, then invert; SelectLargeTriangles, adjusting EdgeThresholdMultiplier), followed by FilterSelection. This catches the unused triangles, floating meshes and oversized triangles.
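To build intuition for the EdgeThresholdMultiplier parameter, here is a toy version of a "select large triangles" pass: flag every triangle whose representative edge exceeds the multiplier times the average edge length. This is only an illustration of the idea; RealityCapture's actual selection works on the full mesh geometry.

```python
from statistics import mean

def select_large_triangles(edge_lengths: list[float], multiplier: float) -> list[int]:
    """Toy 'select large triangles' pass: given one representative edge
    length per triangle, return the indices of triangles whose edge
    exceeds multiplier * average edge length."""
    threshold = multiplier * mean(edge_lengths)
    return [i for i, e in enumerate(edge_lengths) if e > threshold]

edges = [0.01, 0.012, 0.011, 0.25, 0.009]  # one stretched triangle
print(select_large_triangles(edges, 4.0))  # [3]
```

Lowering the multiplier selects more borderline triangles, which is the trade-off you tune in the Advanced selection dialog.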

Model

Model texturize

Mesh settings: Mesh model -> Mesh model & texture: Settings for max specs (MaximalTexelResolution = 16K-8K, Style = FixedTexelSize, TexelSize = Optimal-0.002). Then Unwrap the model and Texture it. Simplify the mesh: Tools -> Mesh model: Simplify tool (Type = Absolute, TargetTriangleCount = a fixed number, ColorReprojection = True, NormalReprojection = True, MaximalTextureCount = Custom = 1)

  • Display the mesh: View -> Scene render: Sweet mode, to display the entire model
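A back-of-the-envelope check for the texel settings above: at a fixed texel size, one texture page covers texel_size × resolution metres per side, so you can estimate how many pages a surface will need. This is purely illustrative arithmetic, not a RealityCapture API.

```python
import math

def textures_needed(surface_area_m2: float, texel_size_m: float,
                    resolution_px: int) -> int:
    """Estimate how many texture pages a surface needs at a fixed
    texel size: each page covers (texel_size * resolution)^2 m^2."""
    side_m = texel_size_m * resolution_px  # metres covered per texture side
    area_per_texture = side_m * side_m     # m^2 per texture page
    return math.ceil(surface_area_m2 / area_per_texture)

# 0.002 m texels on an 8K (8192 px) page cover about 268.4 m^2 each:
print(textures_needed(1000.0, 0.002, 8192))  # 4
```

This explains why MaximalTextureCount = 1 forces either a small texel size budget or a simplified, smaller surface.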

Export the mesh: Workflow -> Output: Export (ExportVertexNormal = True, ExportVertexColor = False because you are using the texture, TransformationPreset = Unreal). Export LODs: Reconstruction -> Tools -> ExportLODs, to export levels of detail from the model (ModelCount, Max/MinTriangleCount).
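One hypothetical way to plan the Max/MinTriangleCount budgets across ModelCount LOD levels is a geometric progression, which roughly mirrors how game-engine LOD chains halve detail per level. RealityCapture's own spacing may differ; treat this only as a planning aid.

```python
def lod_triangle_counts(max_tris: int, min_tris: int, model_count: int) -> list[int]:
    """Space LOD triangle budgets geometrically between max_tris and
    min_tris over model_count levels (a planning heuristic, not
    RealityCapture's actual algorithm)."""
    if model_count == 1:
        return [max_tris]
    ratio = (min_tris / max_tris) ** (1.0 / (model_count - 1))
    return [round(max_tris * ratio ** i) for i in range(model_count)]

print(lod_triangle_counts(4_000_000, 250_000, 3))  # [4000000, 1000000, 250000]
```

Whatever the spacing, keeping the lowest LOD under your engine's distant-mesh budget is what matters for the Unreal export.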

Optimize the models: split the scene into groups (groups that need simplification and groups that keep their detail). The example uses Meshmixer to simplify the mesh with brushes (Sculpt -> Brushes).

Mapping

Classify the model: Reconstruction -> Tools -> AIClassification. Ortho projection map: Reconstruction -> Tools -> Orthoprojection; you can switch between an image, a digital surface model and a digital terrain model, and export it in Reconstruction -> Export. Export Cesium 3D tiles: Reconstruction -> Tools -> ExportLODs, or Workflow -> Export -> Share

Winter Environment using RealityCapture, Quixel Mixer and Twinmotion
Photogrammetry workflow with Tim Hanson, RealityCapture & CG Society
RealityCapture tutorial: Merging exteriors and interiors of buildings
RealityCapture Free Webinar: Advanced workflow for a combination of images and laser scans (min55)
RealityCapture tutorial: New mapping features RC Blaze 1.1
Optimizing RealityCapture 3D models for Game Engines (Unreal Engine)

Component workflow in RealityCapture by CyArk
Reconstruction workflow in RealityCapture by CyArk
Texturing workflow inside of RealityCapture by CyArk
RealityCapture to UE5 - Workflow Tutorial

Last updated 2 years ago
