<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Picture and a thousand Words (PAATWords)]]></title><description><![CDATA[This blog documents deep dives into perception systems—ranging from the mathematics of camera pipelines to hands-on hardware projects like building autonomous 1/16 scale racecars with NVIDIA Jetson. It serves as a professional repository for engineering findings, specializing in the practical application of deep learning and computer vision in production environments.]]></description><link>https://paatwords.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1733512106508/6473a4f7-6b2a-42e3-bd6b-ac271c185c37.png</url><title>Picture and a thousand Words (PAATWords)</title><link>https://paatwords.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 05:05:00 GMT</lastBuildDate><atom:link href="https://paatwords.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Building a 1/16 Scale Racecar with Jetson Nano]]></title><description><![CDATA[Introduction
After looking at the RACECAR project from MIT and UPenn, I have always wanted to build something more compact and cheaper than the RACECAR itself. Arrival of Jetson Nano has made it possible to build one. I will be writing a series of bl...]]></description><link>https://paatwords.com/building-a-116-scale-racecar-with-jetson-nano</link><guid isPermaLink="true">https://paatwords.com/building-a-116-scale-racecar-with-jetson-nano</guid><category><![CDATA[Racecar]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[research]]></category><category><![CDATA[self-driving cars]]></category><category><![CDATA[Sensor Fusion]]></category><dc:creator><![CDATA[Ajay chandra Nallani]]></dc:creator><pubDate>Mon, 29 Jul 2024 18:41:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721423938593/b03672c0-cf2b-4590-a595-592f39031634.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>After looking at the RACECAR projects from MIT and UPenn, I have always wanted to build something more compact and cheaper than the RACECAR itself. The arrival of the Jetson Nano has made that possible. I will be writing a series of blog posts on building this racecar, and this post will be updated regularly. The racecar will serve as a platform for implementing various computer vision and deep learning techniques going forward.</p>
<h3 id="heading-bill-of-materials">Bill of materials</h3>
<p><img src="https://miro.medium.com/v2/resize:fit:1050/1*9_xZssggyQTT2KzIvSq67w.png" alt /></p>
<p>A Traxxas 1/16 Slash 4x4 is used as the chassis for the racecar. The car comes with the battery included. I upgraded the Slash with stiffer shock springs front and back to support the weight that will be added to the car, and I plan to reuse the ESC that came with the Slash. The bill of materials, excluding sensors, comes to around $420.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:1050/1*TcRGRy90p-DRiGttDtZ4zg.png" alt /></p>
<p>The above circuit drives the servo and ESC from an Arduino, and the Arduino is interfaced with the Jetson Nano as a ROS node. The Arduino receives 5 V power over USB, while the servo is powered from the ESC header, which in turn runs off the car battery.</p>
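<p>As a small illustration of the servo/ESC control described above, here is a sketch of the mapping the Jetson-side node would apply before sending a command to the Arduino. This is not the project's actual code; the pulse-width range (1000&ndash;2000 &micro;s, 1500 &micro;s neutral) and the &plusmn;30&deg; steering limits are assumptions based on typical hobby servos and ESCs.</p>

```python
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000,
                      min_deg=-30.0, max_deg=30.0):
    """Map a steering angle to a standard hobby-servo pulse width.

    Most hobby servos and ESCs expect a 50 Hz PWM signal whose high
    pulse lasts 1000-2000 us, with 1500 us meaning neutral/center.
    """
    # Clamp the request to the mechanical limits of the steering linkage.
    angle_deg = max(min_deg, min(max_deg, angle_deg))
    # Linear interpolation between the pulse-width endpoints.
    span = (angle_deg - min_deg) / (max_deg - min_deg)
    return int(round(min_us + span * (max_us - min_us)))
```

<p>The same mapping works for the ESC's throttle channel, with neutral at 1500 &micro;s and forward/reverse on either side.</p>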
<p>I have replaced the carrier board of the Jetson Nano with a Leopard Imaging carrier board, which enables the project to use up to four cameras over CSI-2 connectors; multi-camera work will be pursued later in the project. This project focuses on building an entire camera pipeline from scratch, leveraging all the hardware accelerators available on the Jetson Nano. Future scope includes a bigger variant with more compute and more hardware accelerators; the current plan is to run multiple Jetson Nanos in parallel as a compute cluster.</p>
<p>Experimentation will include multiple sensors: cameras (RGB and ToF), lidar, ultrasonic sensors, and an IMU. To build the body that holds the sensors, I used acrylic sheets, which are easy to cut. The design of the sheet is up to the individual; mine is inspired by the MIT RACECAR. I will cover the steps I followed in building the chassis and platform in coming posts.</p>
]]></content:encoded></item><item><title><![CDATA[Demystifying Camera Pipeline in Embedded Vision Systems]]></title><description><![CDATA[Wide variety of industries from surveillance to robotics uses cameras to perceive the world. Choosing the correct implementation of the camera pipeline can make or break the embedded vision applicatio]]></description><link>https://paatwords.com/demystifying-camera-pipeline-in-embedded-vision-systems</link><guid isPermaLink="true">https://paatwords.com/demystifying-camera-pipeline-in-embedded-vision-systems</guid><category><![CDATA[Embedded vision]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[camera]]></category><category><![CDATA[embeddedcamera]]></category><dc:creator><![CDATA[Ajay chandra Nallani]]></dc:creator><pubDate>Thu, 20 Jun 2024 22:18:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733514897654/c8fbfcda-66e6-415c-817c-97e0e0a8909b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A wide variety of industries, from surveillance to robotics, use cameras to perceive the world. Choosing the right implementation of the camera pipeline can make or break an embedded vision application's performance. This article sheds light on various aspects of the camera pipeline.</p>
<h2>Photosite</h2>
<p>Everything starts with the lens capturing light from the target scene. The lens mounted on the camera gathers the light and focuses it onto the imager, a sensor that is sensitive to visible light. The sensor area is covered with millions of microscopic light-sensitive elements called photosites, arranged in a rectangular grid. Each photosite produces an electrical charge directly proportional to the amount of light it receives. A camera capable of recording Full HD has approximately 1920x1080 (about 2 million) photosites, and each photosite is responsible for one pixel in the image. Photosites are sensitive to light intensity but not to color, so on their own they cannot capture a color image. To get color, a thin filter is placed over the photosites. This layer is referred to as a Color Filter Array (CFA), or Bayer filter in regular usage.</p>
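<p>The one-channel-per-photosite behavior can be simulated with a short sketch. The function below is illustrative only: it takes a full-RGB image and keeps, at each pixel, only the channel an RGGB Bayer filter would pass, producing the single-channel mosaic a real sensor records.</p>

```python
import numpy as np

def sample_bayer_rggb(rgb):
    """Simulate what an RGGB Bayer sensor records from a full-RGB scene.

    Each photosite keeps only the one channel its filter passes:
        R G   <- rows 0, 2, ...: red at even columns, green at odd
        G B   <- rows 1, 3, ...: green at even columns, blue at odd
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green (on red rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green (on blue rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return raw
```

<p>Note the mosaic has the same width and height as the original image but only one value per pixel; recovering the two discarded channels is exactly the demosaicing step described next.</p>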
<h2>Demosaicing/Debayering</h2>
<p>This filter consists of a mosaic of red, green, and blue elements, one placed over each photosite, so every pixel records only one color: red, green, or blue. The pattern contains twice as many green pixels as red or blue ones, because human vision is most sensitive to green; this arrangement is referred to as the Bayer pattern. At each pixel, the two missing channels are calculated by interpolating from neighboring pixels, a process referred to as debayering or demosaicing. Most modern cameras then apply a non-linearity called gamma to these images. Human vision is non-linear: if a scene starts with a dim light source and the intensity is slowly doubled, the perceived change in brightness is less than double. Cameras replicate this behavior using the gamma factor. Most modern cameras also allow capturing the linear image, i.e. the image without the gamma factor applied, often called RAW format. RAW images also store more than 8 bits per channel, allowing finer color gradations to be recorded.</p>
<h2>Interfaces</h2>
<p>All the information read from the imager must be made available to the application layer, either stored or as a live stream. Each frame read from the imager represents a huge amount of data. A Full HD image has about 2 million pixels, and with 3 color channels per pixel that is over 6 million data elements per frame. At 8 bits per element, one frame occupies roughly 50 Mbit (about 6.2 MB), and running the imager at 60 frames/s produces a raw data rate of roughly 3 Gbps (about 373 MB/s). Interfaces help manage these enormous data rates. Some of the popular ones are MIPI CSI-2, GMSL, USB 3.0, and GigE.</p>
<ol>
<li><p>MIPI CSI-2</p>
<p> This interface was developed with mobile devices in mind. A MIPI CSI-2 link typically has 4 data lanes, each capable of transferring up to 2.5 Gbps, for a maximum bandwidth of 10 Gbps. The interface is fast and reliable for handling video from 1080p to 8K and beyond, and it uses few CPU resources. The drawbacks are that it relies on dedicated drivers and the maximum cable length is under 30 cm.</p>
</li>
<li><p>USB interface</p>
<p> Well known for its plug-and-play capability, USB makes development easier than the other interfaces. USB 2.0 tops out at 480 Mbps, which cannot run high-resolution cameras at high frame rates; USB 3.0 raises the maximum bandwidth to 5 Gbps. Cables run up to 5 meters in length; anything longer requires boosters.</p>
</li>
<li><p>GMSL</p>
<p> GMSL is mainly targeted at the automotive space. It is a multi-gigabit point-to-point SerDes (serializer/deserializer) link that can carry both power and video data over a single coaxial cable at lengths of up to 15 m, transferring video at speeds of up to 6 Gbps.</p>
</li>
<li><p>GigE</p>
<p> This Ethernet-based interface can transfer data at up to about 120 MB/s with a maximum cable length of 100 m. It supports multi-camera setups and can be seamlessly integrated into many embedded vision applications, as it sits in a sweet spot of bandwidth, cable length, and multi-camera support.</p>
</li>
</ol>
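<p>The frame-size and data-rate arithmetic above, which drives the choice among these interfaces, can be reproduced with a few lines:</p>

```python
def raw_video_rate(width, height, bits_per_channel=8, channels=3, fps=60):
    """Return (bits per frame, bits per second) for uncompressed video."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame, bits_per_frame * fps

# Full HD, 8 bits x 3 channels, 60 fps:
frame_bits, rate_bps = raw_video_rate(1920, 1080)
# ~49.8 Mbit per frame, ~3 Gbps sustained -- within MIPI CSI-2's
# 10 Gbps budget, but far beyond USB 2.0 or a single GigE link.
```

<p>Plugging in a different resolution or bit depth makes it easy to check whether a given interface can carry the stream uncompressed.</p>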
<p>Most embedded cameras support one or more of these interfaces, and the choice of interface plays a huge role in the performance that can be squeezed out of the application.</p>
<h3>Data Transfer</h3>
<p>Raw data is a lot to transfer around, and different applications handle this transfer in different ways, whether within the same system or from system to system.</p>
<p>One approach is data compression. Especially when working with video or image data, compression can save a lot of bandwidth during transfer, and a lot of storage when the video or image needs to be saved. Data compression could be a blog post of its own.</p>
<p>Since this blog focuses on embedded vision, the two most popular video compression techniques are worth naming: H.264 and H.265 encoding/decoding. The best-known image compression method is JPEG. All the techniques mentioned here are lossy; they exploit the fact that the human visual system is less sensitive to fine detail in an image than to broader features.</p>
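<p>One concrete way these codecs exploit the eye's insensitivity to fine detail is chroma subsampling: H.264, H.265, and JPEG typically store color at lower resolution than brightness. The sketch below quantifies the savings for the common schemes (8-bit YCbCr assumed):</p>

```python
def bytes_per_frame(width, height, subsampling="4:4:4"):
    """Bytes per 8-bit YCbCr frame under common chroma subsampling schemes.

    4:4:4 keeps full-resolution chroma; 4:2:2 halves chroma horizontally;
    4:2:0 halves chroma both horizontally and vertically.
    """
    luma = width * height
    chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[subsampling]
    # One full-resolution luma plane plus two reduced chroma planes.
    return int(luma + 2 * luma * chroma_fraction)

full = bytes_per_frame(1920, 1080, "4:4:4")  # 6,220,800 bytes
sub = bytes_per_frame(1920, 1080, "4:2:0")   # 3,110,400 bytes -- half the data
```

<p>That factor-of-two reduction comes before the codec's transform and entropy coding stages, which shrink the stream much further.</p>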
<h3>Glass to glass Latency metric</h3>
<p>Every system design needs a good evaluation method. Since embedded vision systems are usually latency-critical, depending on the application, a good metric is needed to evaluate the system. One such metric is glass-to-glass latency.</p>
<p>Glass-to-glass latency is defined as the time delay between light hitting the camera sensor and the resulting image appearing on a viewer's screen. One good way to measure it is to point an LED light source at the camera lens and detect its reproduction on the display with a photodetector. This method requires no time synchronization between devices and improves precision: the glass-to-glass latency is the time delta between when the LED is activated and when it is detected on the screen.</p>
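<p>The detection side of this measurement reduces to a simple threshold crossing. The function below is a sketch under the assumptions that the photodetector yields timestamped brightness samples and that a single fixed threshold separates "LED off" from "LED on" on screen:</p>

```python
def glass_to_glass_delay(led_on_t, samples, threshold):
    """Estimate glass-to-glass latency from photodetector samples.

    samples: iterable of (timestamp_s, brightness) pairs taken from a
    photodetector watching the display; led_on_t is the time the LED
    facing the camera was switched on. The delay is the time until the
    display first shows the LED brighter than `threshold`.
    """
    for t, brightness in samples:
        if t >= led_on_t and brightness > threshold:
            return t - led_on_t
    return None  # LED never detected on screen
```

<p>In practice the photodetector's sampling rate bounds the measurement resolution, so it should sample much faster than the latency being measured.</p>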
<h3>References</h3>
<ol>
<li>Bachhuber, C., &amp; Steinbach, E. (2016, September). A system for high precision glass-to-glass delay measurements in video communication. In <em>2016 IEEE international conference on image processing (ICIP)</em> (pp. 2132-2136). IEEE.</li>
</ol>
]]></content:encoded></item></channel></rss>