Hi Team,
I am working on a project studying plant health, growth and stress, and I am a newbie in this area. At present I am acquiring images of plant leaves using Raspberry Pi RGB and NoIR cameras. Can I use this framework, which is focused on multispectral imaging, for my case? Can it help me with image analysis tasks such as spectral reflectance estimation from the RGB image, measuring vegetation indices, or pattern analysis for early detection of plant disease?
Kindly share your thoughts.
Also, I am getting a macro error at importsettings.txt ("The system cannot find the specified path") when I try to test the tool.
Try deleting the file (imagej/plugins/micaToolbox/importsetting.txt) and see if that helps. Otherwise, make sure you have installed ImageJ somewhere on your system without unusual characters in the path (e.g. try installing it to a location with a very simple path).
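If you want to rule the path out quickly, a check along these lines (purely illustrative, not part of ImageJ or the toolbox; the install path below is a placeholder) will tell you whether the settings file exists and whether the path contains any non-ASCII characters:

    # Illustrative check only; replace the path with your own ImageJ install location.
    from pathlib import Path

    imagej_dir = Path("C:/ImageJ")  # placeholder install path
    settings = imagej_dir / "plugins" / "micaToolbox" / "importsettings.txt"

    print("settings file exists:", settings.exists())
    print("non-ASCII characters in path:", [c for c in str(imagej_dir) if ord(c) > 127])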
Yes, the micaToolbox is ideally suited to doing things like this – combining images from multiple bands of the spectrum and calibrating them to create linear, normalised images for objective analysis. You might need to make your own camera configuration file. The default configurations are simple RGB, or visible RGB plus the blue and red channels from a UV image (green is useless in UV). You can use those as a guide, and perhaps use the IR in place of UV.
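Once you have calibrated, aligned red and near-infrared bands (from your RGB and NoIR cameras), a vegetation index such as NDVI is just band arithmetic. Here is a rough sketch outside ImageJ; the file names and the assumption that you have exported the bands as 32-bit TIFFs are mine, not something the toolbox produces:

    # Minimal NDVI sketch, illustrative only; micaToolbox itself runs inside ImageJ.
    # Assumes calibrated, aligned reflectance images exported as 'red.tif' and
    # 'nir.tif' -- placeholder file names.
    import numpy as np
    import tifffile  # pip install tifffile

    red = tifffile.imread("red.tif").astype(np.float64)
    nir = tifffile.imread("nir.tif").astype(np.float64)

    # NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
    denom = nir + red
    ndvi = np.zeros_like(denom)
    np.divide(nir - red, denom, out=ndvi, where=denom > 0)

    print("mean NDVI:", ndvi.mean())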
Hi Rob,
Yes, you could easily use the Naive Bayes classifier to determine the colour of diseased leaf parts and quantify their coverage using the particle analysis. If you wanted to know whether a potential herbivore could detect these patches, you could cluster your images using the RNL clustering. If you simply wanted to quantify morphological differences between healthy and sick leaves, the QCPA pattern parameters provide a rich variety of measures designed to capture any differences. Best, Cedric
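If it helps to see the general idea outside ImageJ, here is a rough sketch of pixel-level Naive Bayes classification plus a coverage measure. This is just the concept in Python, not the micaToolbox implementation, and the training pixels, labels and image are placeholders assumed to have been sampled by hand:

    # Conceptual sketch only, not the micaToolbox implementation.
    # 'train_pixels' (N x bands), 'train_labels' (0 = healthy, 1 = diseased) and
    # 'image' (H x W x bands, calibrated) are all placeholders.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from scipy import ndimage

    def diseased_coverage(image, train_pixels, train_labels):
        h, w, bands = image.shape
        clf = GaussianNB().fit(train_pixels, train_labels)

        # Classify every pixel, then reshape back into a mask.
        pred = clf.predict(image.reshape(-1, bands)).reshape(h, w)
        diseased = pred == 1

        # Coverage as a fraction of the image, plus a count of connected
        # diseased patches, loosely analogous to particle analysis.
        labelled, n_patches = ndimage.label(diseased)
        return diseased.mean(), n_patches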
Hi, what about the analysis of 3D shapes or 3D images by animals?
Yes, the analysis of visual information in a 3D context is a rather understudied aspect of visual ecology. It should, without a doubt, become possible to use QCPA in a 3D context, although it is not quite there yet. 3D scanning technology is cheap and readily available. One of the issues with calibrated photography relates to depth effects, i.e. problems with making inferences about the physical properties of surfaces that are not in the same plane/orientation as a calibration standard. That's a detail, though, and entirely possible to overcome or work around. For now, QCPA is designed to work primarily in a 2D context. One can always try to contextualise such 3D scans with 2D imagery, or, similar to photogrammetry, use a set of calibrated 2D images taken from various angles to 'paint' a 3D scan with cone-catch information.
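To make the 'painting' idea a little more concrete, here is a very rough sketch of projecting 3D scan vertices into a calibrated 2D image with a simple pinhole camera model and sampling the image values at those pixels. The camera matrix, vertices and image are all assumptions for illustration, not anything QCPA provides:

    # Rough, illustrative pinhole-projection sketch; QCPA itself works on 2D images.
    import numpy as np

    def sample_vertex_values(vertices, image, P):
        """vertices: (N, 3) points; image: (H, W) or (H, W, C) array; P: (3, 4) camera matrix."""
        # Project homogeneous vertex coordinates and normalise by depth.
        homog = np.hstack([vertices, np.ones((len(vertices), 1))])
        proj = homog @ P.T
        uv = proj[:, :2] / proj[:, 2:3]

        # Round to pixel indices and keep only vertices landing inside the image.
        cols = np.round(uv[:, 0]).astype(int)
        rows = np.round(uv[:, 1]).astype(int)
        inside = (rows >= 0) & (rows < image.shape[0]) & (cols >= 0) & (cols < image.shape[1])

        # Vertices outside the view keep NaN; the rest take the image value at their pixel.
        values = np.full((len(vertices),) + image.shape[2:], np.nan)
        values[inside] = image[rows[inside], cols[inside]]
        return values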