Tracking unlabeled cancer cells imaged with low resolution in wide migration chambers via U-NET class-1 probability (pseudofluorescence)
Journal of Biological Engineering volume 17, Article number: 5 (2023)
Cell migration is a pivotal biological process whose dysregulation is found in many diseases, including inflammation and cancer. Advances in microscopy technologies now allow the study of cell migration in vitro, within engineered microenvironments that resemble in vivo conditions. However, to capture an entire 3D migration chamber for extended periods of time and with high temporal resolution, images are generally acquired at low resolution, which poses a challenge for data analysis. Indeed, cell detection and tracking are hampered by the large pixel size (i.e., cell diameter down to 2 pixels), the possibly low signal-to-noise ratio, and distortions in cell shape due to changes in the z-axis position. Although fluorescent staining can be used to facilitate cell detection, it may alter cell behavior and may suffer from fluorescence loss over time (photobleaching).
Here we describe a protocol that employs an established deep learning method (U-NET), to specifically convert transmitted light (TL) signal from unlabeled cells imaged with low resolution to a fluorescent-like signal (class 1 probability). We demonstrate its application to study cancer cell migration, obtaining a significant improvement in tracking accuracy, while not suffering from photobleaching. This is reflected in the possibility of tracking cells for three-fold longer periods of time. To facilitate the application of the protocol we provide WID-U, an open-source plugin for FIJI and Imaris imaging software, the training dataset used in this paper, and the code to train the network for custom experimental settings.
The regulation of many biological processes is mediated by the migration of cells from one anatomical location to another to exert their function. For example, primordial germ cell migration in zebrafish is essential to ensure correct organ development. Moreover, mounting proper immune responses requires a fine-tuned regulation of leukocyte trafficking and migration [2,3,4].
Amongst the mechanisms involved in cell migration, chemotaxis polarizes cells and controls the direction of migration toward favorable locations [5, 6]. Hence, several studies have focused on elucidating the molecular mechanisms and signaling pathways that regulate chemotaxis in vitro and in vivo. However, the directional movement of cells is regulated not only by the type of soluble cues diffused into and retained by the environment but also by the environment itself [7, 8].
Therefore, engineered microenvironments are essential to study cell migration in vitro. Amongst these, 3D migration is a setting where cells are embedded in collagen-like fibers to mimic the extracellular matrix (ECM) in vitro [6, 9] (Fig. 1A). Widefield Microscopy (WM) is an established technique to perform long-term imaging studies in large migration chambers. This technique can be applied by recording the fluorescence intensity or the intensity of the light transmitted through the sample, without necessarily requiring fluorescent staining. In this imaging modality, the acquired data consists of a series of 2D grayscale images captured over time.
However, when WM is applied to study the migration of motile cells in large 3D migration chambers, the analysis of the acquired series of images presents specific challenges. Indeed, the classical analysis pipeline involves three steps: cell detection, cell tracking, and computation of motility measures [10, 11]. The application of such a pipeline is hampered at the first step, due to drastic changes in cell shape that introduce cell detection errors. These changes are associated with the frequent squeezing of the cytoskeleton while migrating through dense ECM, or are introduced as an artifact during migration along the z-axis. In the latter case, cells are imaged outside the focal plane, leading to blurred and enlarged shapes in the acquired images.
Additionally, depending on the experimental settings, cells can require a long period to exert directional movement. Hence, long acquisition times are needed, which may prevent the usage of fluorescent staining (used to facilitate cell detection) due to photobleaching or phototoxicity. In such cases, imaging of unlabeled cells using transmitted light (TL) is necessary. Lastly, the analysis of cells following long tracks (i.e., > 150 μm) demands a large field of acquisition and necessitates low magnification (e.g., a 4X objective). Therefore, low resolution is another challenge for cell detection, which subsequently compromises tracking (Fig. 1B).
Recent advances in artificial intelligence methods applied to bioimage analysis have remarkably improved the accuracy of cell detection and, subsequently, tracking [12,13,14]. Amongst these, end-to-end neural networks with convolutional layers, such as the U-NET and its variants, transform an input image into another image as output and have improved the segmentation of complex structures with respect to single-pixel classifiers, gaining application in biomedical imaging for cell detection, counting, and morphological analysis [17, 18]. The usage of U-NET was also demonstrated to improve cell detection and tracking, due to the increased robustness of object detection on binary masks rather than on original images, which may suffer from non-uniform illumination or poor signal-to-noise ratio [19,20,21,22].
Although U-NET was applied to many different imaging modalities and cell types, a pipeline to specifically analyze time-lapse images of unlabeled cells acquired with low magnification via brightfield microscopy in 3D migration chambers is still missing.
Therefore, we propose WID-U (U-NET for WIDe migration chambers), a plugin for common bioimaging software such as Imaris (Oxford Instruments) and FIJI, that converts the TL signal from brightfield microscopy into a fluorescent-like signal (pseudofluorescence) corresponding to the class-1 probability from the U-NET. The signal generated by WID-U yielded an efficient detection of the cells using standard spot-detection methods available in TrackMate [23, 24] and Imaris, which subsequently improved cell tracking accuracy in low-resolution images from 3D in vitro environments.
Pipeline to convert TL to pseudofluorescence
To convert the TL signal from unlabeled cells to pseudofluorescence, we developed an image processing pipeline based on deep learning. This pipeline was specifically developed to face the challenges arising when images of cells are acquired in large 3D migration chambers, at low magnification (4x) and with large fields of view (2 mm × 2 mm). To account for such low magnification and large fields of view, images are processed with a sliding window of 56 × 56 pixels (~ 92 μm × 92 μm) (Fig. 2A, red square). Subsequently, each window is upscaled by a factor of 4 to 224 × 224 pixels and processed via a patch classifier based on the U-NET architecture (Fig. 2B). This architecture receives as input the upscaled TL images (Fig. 2B, grayscale image) and generates as output an image where the intensity of each pixel is the class-1 probability, or pseudofluorescence (Fig. 2B, magenta-colored image). The output of the U-NET is then downscaled and reassembled as a new imaging channel of the original image (Fig. 2C). To train the network, datasets consisting of upscaled image pairs were manually created using a custom tool that virtually zooms in on selected areas of different videos at different time points. This allowed including examples of cells lying at different focal planes and in areas with different illumination, conferring robustness to brightness/contrast changes to the trained network (Fig. S1).
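The tiling scheme described above can be sketched as follows. Here `predict_patch` is a hypothetical stand-in for the trained U-NET, and the nearest-neighbour upscaling and block-average downscaling are our simplifying assumptions (the paper does not specify the interpolation used):

```python
import numpy as np

def tl_to_pseudofluorescence(tl_image, predict_patch, win=56, scale=4):
    """Convert a 2D TL image to a class-1 probability map by tiling.

    `predict_patch` stands in for the trained U-NET: it maps an upscaled
    (win*scale x win*scale) patch to a probability map of the same size.
    """
    h, w = tl_image.shape
    out = np.zeros_like(tl_image, dtype=np.float32)
    for y in range(0, h, win):
        for x in range(0, w, win):
            patch = tl_image[y:y + win, x:x + win]
            ph, pw = patch.shape  # border tiles may be smaller
            # Upscale the window by `scale` (nearest-neighbour replication).
            up = np.kron(patch, np.ones((scale, scale), dtype=patch.dtype))
            prob = predict_patch(up)  # class-1 probability map
            # Downscale back by averaging each scale x scale block.
            down = prob.reshape(ph, scale, pw, scale).mean(axis=(1, 3))
            out[y:y + ph, x:x + pw] = down
    return out
```

With an identity `predict_patch`, the round trip of upscaling and block-averaging returns the original tile, which makes the reassembly easy to verify.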
Enhanced cell detection and tracking of B cell lymphoma in 3D migration chambers
We applied the proposed pipeline to analyze videos of VAL cells (a Germinal Center-derived B cell lymphoma) acquired in 3D microenvironments. A dataset of 150 image pairs was used to train the network (Fig. 3A). Then, we compared the quality of the pseudofluorescence signal with respect to TL, or to real fluorescence emitted by cyan fluorescent protein positive (CFP+) cells. The proposed pipeline yielded a significant improvement in the signal-to-noise ratio (SNR) with respect to TL images (Fig. 3B), and a higher, though not statistically significant, SNR with respect to real fluorescence (CFP). In contrast to CFP, the intensity of the pseudofluorescent signal did not suffer from photobleaching (Fig. 3C). Although noise was introduced by the automatic adjustment of the focal plane at each time point, the mean intensity of the pseudofluorescent cells never decreased below 80% of the intensity at the initial time point.
Moreover, the proposed pipeline increased the visibility of cells, which were out of focus or with deformed shapes (Fig. 3D). Altogether, these properties make pseudofluorescence similar to real fluorescence, but with increased stability over time.
To validate the effect of pseudofluorescence on the quality of cell tracking, we performed automatic cell tracking using TL, real fluorescence (CFP+ cells), or pseudofluorescence signals. Pseudofluorescence yielded significantly more accurate tracks than the original TL signal, with an average three-fold increase in the track duration (Fig. 3E). In comparison with real fluorescence, track duration was longer, especially in the late time points when the fluorescent signal was fading (Fig. 3B-E). In general, pseudofluorescence decreased the number of tracking errors, resulting in fewer interrupted tracks (Fig. 3F) and fewer glitches when cells were in close proximity (Fig. 3G), facilitating the automatic tracking of cells using pseudofluorescence in large 3D microenvironments (Fig. 3H, Fig. S2).
Additionally, we manually tracked 195 cells from three different experiments, corresponding to 23,991 spots (all the cells in the field of view were tracked, respectively 40, 96, 59 tracks, and 5182, 10,117, 8692 spots), and compared these tracks with the ones obtained by applying the proposed pipeline (Fig. 3I). The true positive rate of spot detection was greater than 95%, the false positive rate was lower than 17%, and the false negative rate was lower than 5%. To evaluate the accuracy of cell tracking (i.e., penalizing track-switch errors), we performed a multi-object tracking analysis (MOTA), obtaining a MOTP score greater than 0.74 (Fig. 3J).
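For reference, the standard multi-object tracking metrics are commonly defined as follows (this is the usual formulation from the tracking literature; the exact variant used in the cited benchmark may differ):

```latex
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t},
\qquad
\mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t}
```

where FN, FP, and IDSW count missed detections, false positives, and identity switches at frame t, GT is the number of ground-truth objects, d is the matching score of match i at frame t, and c is the number of matches at frame t. When d is an overlap score, higher MOTP is better, consistent with the reported score greater than 0.74.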
Enhanced cell detection and tracking of MDA-MB231 breast cancer cells with heterogeneous shapes
To validate the applicability of the pipeline to track cell types with substantially different morphologies, we performed chemotaxis assays using MDA-MB-231, a human breast cancer cell line established from a pleural effusion of a 51-year-old Caucasian female with metastatic mammary adenocarcinoma. These are epithelial adherent cancer cells with a heterogeneous morphology, either spindle-shaped (long and thin) or rounded. A dataset with 73 image pairs from 3 independent experiments depicting cells with both morphologies was generated (Fig. 4A). Then, the network was re-trained and applied to convert the TL signal to pseudofluorescence. The computed class-1 probability was used as input to a tracking algorithm based on threshold detection and linking, obtaining significantly longer tracks with respect to the ones obtained using the raw TL signal (Fig. 4B-C).
To facilitate the execution of the pipeline for TL to fluorescence conversion, a plugin for the Imaris (Oxford Instruments) and a plugin for the FIJI bioimaging software have been developed. The protocol that makes use of the plugin to enhance cell tracking is summarized in Fig. 5. Briefly, it is sufficient to load the image sequence in the preferred imaging software, then launch the WID-U plugin to transform the TL signal into a new channel with pseudofluorescence. Finally, automated cell tracking can be performed on the newly generated channel. Regarding the installation, the software comes in three parts: the plugin itself, the program to re-train, if needed, the network for custom cell types, and the program that performs the computations using deep learning. These last two programs can be either installed on the same machine where FIJI/Imaris is installed or configured on a remote machine dedicated to computation (a CUDA-enabled machine is recommended to speed up the execution). To configure the connection to such a machine, it is sufficient to set its IP address in the plugin (Fig. S3). Instructions are included in the README file. Additionally, to use WID-U without a GPU-enabled machine, we made available a macro to export the images, process them on a remote machine (e.g., a free online deep learning platform such as Google COLAB), and a macro to import the results in the desired software (Fig. S3).
Materials and methods
Cell line and cultures
VAL cells were cultured in RPMI-1640 medium supplemented with 10% heat-inactivated fetal bovine serum (FBS), 1% Penicillin/Streptomycin, 1% GlutaMAX, 1% NEAA, 1% sodium-pyruvate, and 50 μM β-mercaptoethanol. CFP+ cells were cloned as described previously. MDA-MB-231 cells were cultured in Dulbecco’s Modified Eagle Medium (DMEM) containing D-glucose 4.5 g L−1 and GlutaMAX (619650–026, GIBCO, ThermoFisher Scientific, Switzerland) supplemented with fetal bovine serum 10% (16000–044, GIBCO, ThermoFisher) and penicillin-streptomycin 1% (15070063, GIBCO, ThermoFisher Scientific). Cells were incubated under standard culture conditions (CO2 5%, O2 95%, 37 °C).
3D migration of VAL cells was performed using the 3D chamber μ-Slide from Ibidi as described by Antonello et al. https://doi.org/10.3389/fimmu.2022.1067885. Briefly, cells were embedded in a collagen matrix formed by 1.6 mg/mL PureCol (Collagen, Sigma-Aldrich), 0.36% PBS supplemented with 0.36% FBS, 0.036% P/S, 1.5 μg/mL recombinant human ICAM-1/CD54 Fc chimera (R&D systems) at 4 °C. The temperature was slowly raised over 45 min to 37 °C to induce a homogeneous collagen fiber polymerization. Complete medium was added to both side reservoirs. After 24 hours, 15 μL of 400 nM CXCL12 were added to one of the reservoirs and time-lapse video microscopy was performed for 6 hours at 20 seconds time intervals using an ImageXpress® (Molecular Devices) high throughput microscope, equipped with an incubation system set to CO2 5%, O2 95%, 37 °C, with a Nikon Plan Apo 4x / NA 0.2 and 20 mm working distance objective. CFP was excited with a Lumencore LED lamp, excitation band 438/24, collection band 483/32. For brightfield imaging, the condenser was set to Koehler illumination. Collection was performed with an Andor Zyla sCMOS camera. Migration assays of MDA-MB-231 cells were performed using the μ-Slide chemotaxis chambers from Ibidi, according to the manufacturer’s instructions. Briefly, cells were resuspended at 3 × 106 cells ml− 1 in Dulbecco’s Modified Eagle Medium (DMEM) and glutaMAX (619650–026, GIBCO, ThermoFisher Scientific, Switzerland) supplemented with fetal bovine serum 1% (16000–044, GIBCO, ThermoFisher). The observation area of the chamber was filled with cells, and the chamber was placed in the incubator (CO2 5%, O2 95%, 37 °C) for 1 h to allow cell adherence. Time-lapse video microscopy images were recorded for 18 h with a time interval of 600 seconds using the ImageXpress as described above.
Transformation of TL images into pseudofluorescence
TL images were converted into pseudofluorescence by employing an end-to-end neural network with convolutional layers based on the U-NET architecture. A dataset with pairs of TL images and pseudofluorescence (binary masks) was created by manually drawing the contour of cells. This dataset included 150 images of 56 × 56 pixels (91 × 91 μm) from different experiments. These images were upscaled to 224 × 224 pixels to facilitate annotation, then downscaled to 112 × 112 pixels for training and augmented to 15,000 images. The trained network was then applied to convert images of size ≥ 1000 × 500 pixels to pseudofluorescence by classifying a moving window of 56 × 56 pixels. The class-1 probability computed by the U-NET was used as pseudofluorescence. Hyperparameters for training can be found in the code in the Supplementary material. Briefly: Adam optimizer, loss = binary cross-entropy, initial learning rate = 10⁻⁴, batch size = 2, epochs = 60. Data augmentation was performed using Keras image generators, with rotation range = 0.5, zoom range = 0.5, vertical and horizontal shift = 0.5, shear range = 0.2, horizontal and vertical flip, and zero filling.
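The reported training setup can be collected into plain configuration dictionaries. The keyword names below are the standard Keras `ImageDataGenerator` arguments; mapping the paper's wording onto them (e.g., "zero filling" as `fill_mode="constant"`) is our assumption, and the commented lines show how the augmentation config would feed Keras:

```python
# Training hyperparameters as reported in the paper.
train_cfg = {
    "optimizer": "adam",
    "loss": "binary_crossentropy",
    "learning_rate": 1e-4,
    "batch_size": 2,
    "epochs": 60,
}

# Data augmentation parameters, expressed as Keras ImageDataGenerator
# keyword arguments (our mapping of the paper's wording).
aug_cfg = {
    "rotation_range": 0.5,
    "zoom_range": 0.5,
    "width_shift_range": 0.5,   # horizontal shift
    "height_shift_range": 0.5,  # vertical shift
    "shear_range": 0.2,
    "horizontal_flip": True,
    "vertical_flip": True,
    "fill_mode": "constant",    # "zero filling"
    "cval": 0.0,
}

# from tensorflow.keras.preprocessing.image import ImageDataGenerator
# gen = ImageDataGenerator(**aug_cfg)
# model.fit(gen.flow(x_train, y_train, batch_size=train_cfg["batch_size"]),
#           epochs=train_cfg["epochs"])
```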
Initially, TL images were transformed into pseudofluorescence images. Then, cells were detected and tracked using the Spots tracking functionality of the Imaris software (Oxford Instruments, v.7.7.2) in the original TL channel, in the imaging channel capturing fluorescence, and in the pseudofluorescence channel. In all cases, an estimated spot diameter of 8 μm was selected and background subtraction was enabled to account for non-uniform illumination. Tracking was performed using an autoregressive motion model, with a maximum distance of 20 μm and a maximum gap size of 0. Tracks shorter than 300 seconds were excluded from the analysis. Tracks were divided into two classes (WT and CFP+ cells), based on the mean fluorescence intensity of the imaging channel centered on CFP. Finally, the duration of each track was computed. Tracks outside the migration channel were deleted manually.
Automatic tracking was also performed using TrackMate in FIJI, using the LoG spot detector or automatic thresholding, and the Simple LAP tracker for spot tracking. The same thresholds described above for tracking in Imaris were also used in TrackMate.
Image and tracking measures
Track measures were exported from Imaris or TrackMate and processed in Matlab to compute track duration, fluorescence decay, and SNR. SNR was estimated as [avg(FG) − avg(BG)] / std(BG), where FG denotes the intensity of pixels in the foreground and BG the intensity of pixels in the background, as previously described.
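Although the paper computes this measure in Matlab, the SNR estimate itself is a one-liner; a sketch in Python (our own illustration, not the authors' code) makes the definition concrete:

```python
import numpy as np

def snr(fg, bg):
    """SNR = (mean(FG) - mean(BG)) / std(BG).

    `fg`: intensities of foreground (cell) pixels.
    `bg`: intensities of background pixels.
    """
    fg = np.asarray(fg, dtype=np.float64)
    bg = np.asarray(bg, dtype=np.float64)
    return (fg.mean() - bg.mean()) / bg.std()
```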
SNR and track duration values were analyzed with the PRISM software. Statistics were performed with one-way ANOVA (*P < 0.05, **P < 0.01, ****P < 0.0001).
The application of deep learning to in vitro time-lapse imaging improved the tracking accuracy of cancer cells in large 3D migration microenvironments. In this paper, this was achieved by training a U-NET architecture with a custom dataset for B cell lymphoma (globular shapes) and a custom dataset for breast cancer cells (heterogeneous shapes). To apply our pipeline to other cell types or other imaging modalities, the network can be re-trained. In the cases included in this paper, 70 to 150 image pairs with data augmentation were sufficient to improve tracking accuracy. However, to enhance the robustness of cell detection, the image pairs included in the training set should be representative of cells in different areas of the microenvironment. The creation of the training set was facilitated by upscaling. This increased precision during manual annotation when cells had a small area (i.e., 4 pixels), and annotation was completed by an imaging expert in less than 6 hours for both cases. However, to minimize the number of training examples required, we recommend the usage of transfer learning approaches.
The tracking was improved mainly as a consequence of more accurate spot detection. Several tools based on supervised machine learning are nowadays available for cell segmentation. However, to track cell centroids over time, accurate shape reconstruction may not be required. Indeed, the class-1 probability generated by the U-NET, which decreases towards the borders of the cells (Fig. S4), proved particularly appropriate for subsequent application with a variety of spot detection methods, as previously demonstrated [19,20,21,22], and allows the user to select more or fewer detected objects by defining an intensity threshold, in line with the most common tracking pipelines. In this study, we tested two approaches typically used in bioimaging: watershed with background subtraction (Imaris) and the LoG detector (FIJI/TrackMate). In both cases, performances improved. Performances also increased when automatic thresholding (FIJI/TrackMate) was used for spot detection. This suggests that the pseudofluorescence is uniform across the field of view and that intensity-based spot detection becomes possible. Our protocol facilitates computing this signal systematically, either by automatically sending data from FIJI/Imaris to a cluster with a GPU or by allowing data export/import for processing on free deep learning resources such as Google COLAB.
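To illustrate intensity-threshold spot detection on a class-1 probability map, a minimal toy detector (our own sketch, not the Imaris or TrackMate implementation) can threshold the map, group pixels into connected components, and reduce each component to a centroid:

```python
import numpy as np
from collections import deque

def detect_spots(prob, thr=0.5):
    """Toy threshold detector: pixels above `thr` are grouped into
    4-connected components, each reduced to its centroid (y, x)."""
    mask = prob > thr
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # BFS over one connected component
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                pix = []
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys = sum(p[0] for p in pix) / len(pix)
                xs = sum(p[1] for p in pix) / len(pix)
                centroids.append((ys, xs))
    return centroids
```

Raising or lowering `thr` directly selects more or fewer detected objects, which is the behavior the intensity threshold exposes in the tracking pipelines discussed above.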
The multi-object tracking analysis revealed a MOTP score of 0.74 when a simple LAP tracking algorithm was used. This suggests that the method can be applied to automate the analysis, although further improvements can be obtained by more advanced linking algorithms or by manual post-correction. To further enhance accuracy, pseudofluorescence can be used in combination with recently developed methods for cell detection, such as those based on geometric properties and deep learning.
In conclusion, the proposed pipeline allowed cell tracking in large 3D migration chambers over extended periods of time, retaining the simplicity of cell detection as when real fluorescence is used, but avoiding photobleaching and other side effects of cell labeling.
Availability of data and materials
Data and code are released under the open-source GPL v3 license at https://github.com/pizzagalli-du/wid-u. The datasets used to train the U-NET architecture (crops and binary masks) are available as PNG files in training_data_VAL.zip and training_data_MDA-MB-231.zip for VAL lymphoma and MDA-MB-231 breast cancer cells, respectively. We also provide the weights of the pre-trained U-NET models for VAL and MDA-MB-231 cells observed with a pixel size of 1.62 μm (UNET_VAL.h5 and UNET_MDA-MB-231.h5). The U-NET model was defined and trained in the Keras (v 2.3.1) framework, on a TensorFlow-GPU (v 2.2.0) backend, using Python (v 3.8.3). The script to re-train the network with a custom dataset is available in the file “train_unet.py”, while the program to create the training dataset with upscaling and manually drawn binary masks was written in Matlab r2019b and is available in the file “annotate_crops.m”.
Raw microscopy data are deposited on the IMMUNEMAP Open Data platform (www.immunemap.org).
Boldajipour B, Mahabaleshwar H, Kardash E, et al. Control of chemokine-guided cell migration by ligand sequestration. Cell. 2008;132(3):463–73. https://doi.org/10.1016/j.cell.2007.12.034.
Luster AD, Alon R, von Andrian UH. Immune cell migration in inflammation: present and future therapeutic targets. Nat Immunol. 2005;6(12):1182–90. https://doi.org/10.1038/ni1275.
Madri JA, Graesser D. Cell migration in the immune system: the evolving inter-related roles of adhesion molecules and proteinases. Dev Immunol. 2000;7(2–4):103–16. https://doi.org/10.1155/2000/79045.
Mayor R, Etienne-Manneville S. The front and rear of collective cell migration. Nat Rev Mol Cell Biol. 2016;17(2):97–109. https://doi.org/10.1038/nrm.2015.14.
Sallusto F, Baggiolini M. Chemokines and leukocyte traffic. Nat Immunol. 2008;9(9):949–52. https://doi.org/10.1038/ni.f.214.
Yamada KM, Sixt M. Mechanisms of 3D cell migration. Nat Rev Mol Cell Biol. 2019;20(12):738–52. https://doi.org/10.1038/s41580-019-0172-9.
SenGupta S, Parent CA, Bear JE. The principles of directed cell migration. Nat Rev Mol Cell Biol. 2021;22(8):529–47. https://doi.org/10.1038/s41580-021-00366-6.
De la Fuente IM, López JI. Cell motility and cancer. Cancers (Basel). 2020;12(8):1–15. https://doi.org/10.3390/cancers12082177.
Sixt M, Lämmermann T. In vitro analysis of chemotactic leukocyte migration in 3D environments. In: Wells CM, Parsons M, editors. Cell migration: developmental methods and protocols. Methods in Molecular Biology. Humana Press; 2011. p. 149–65. https://doi.org/10.1007/978-1-61779-207-6_11.
Pizzagalli DU, Farsakoglu Y, Palomino-Segura M, et al. Data descriptor: leukocyte tracking database, a collection of immune cell tracks from intravital 2-photon microscopy videos. Sci Data. 2018;5:1–13. https://doi.org/10.1038/sdata.2018.129.
Beltman JB, Marée AFM, De Boer RJ. Analysing immune cell migration. Nat Rev Immunol. 2009;9(11):789–98. https://doi.org/10.1038/nri2638.
Ulman V, Maška M, Magnusson KEG, et al. An objective comparison of cell-tracking algorithms. Nat Methods. 2017;14(12):1141–52. https://doi.org/10.1038/nmeth.4473.
Berg S, Kutra D, Kroeger T, et al. Ilastik: interactive machine learning for (bio) image analysis. Nat Methods. 2019;16(12):1226–32. https://doi.org/10.1038/s41592-019-0582-9.
Stringer C, Wang T, Michaelos M, Pachitariu M. Cellpose: a generalist algorithm for cellular segmentation. Nat Methods. 2021;18(1):100–6. https://doi.org/10.1038/s41592-020-01018-x.
Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer, Cham; 2015. p. 234–41.
Alzahrani Y, Boufama B. Biomedical image segmentation: a survey. SN Comput Sci. 2021;2(4):310. https://doi.org/10.1007/s42979-021-00704-7.
Falk T, Mai D, Bensch R, et al. U-net: deep learning for cell counting, detection, and morphometry. Nat Methods. 2019;16(1):67–70. https://doi.org/10.1038/s41592-018-0261-2.
Durkee MS, Abraham R, Clark MR, Giger ML. Artificial intelligence and cellular segmentation in tissue microscopy images. Am J Pathol. 2021;191(10):1693–701. https://doi.org/10.1016/j.ajpath.2021.05.022.
Wen C, Miura T, Voleti V, et al. 3Deecelltracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images. Elife. 2021;10(1):1–37. https://doi.org/10.7554/eLife.59187.
Lugagne JB, Lin H, Dunlop MJ. Delta: automated cell segmentation, tracking, and lineage reconstruction using deep learning. PLoS Comput Biol. 2020;16(4):1–18. https://doi.org/10.1371/journal.pcbi.1007673.
Sun Z, Song Y, Li Q, et al. An integrated method for tracking and monitoring stomata dynamics from microscope videos. Plant Phenomics. 2021;2021:9835961. https://doi.org/10.34133/2021/9835961.
Wanli Y, Huawei L, Fei W, Zhou D. Cell tracking based on multi-frame detection and feature fusion. In 2021 3rd International Conference on Advanced Information Science and System (AISS 2021) (AISS 2021). New York: Association for Computing Machinery; 2022. Article 47, 1–6. https://doi.org/10.1145/3503047.3503098.
Tinevez JY, Perry N, Schindelin J, et al. TrackMate: an open and extensible platform for single-particle tracking. Methods. 2017;115:80–90. https://doi.org/10.1016/j.ymeth.2016.09.016.
Ershov D, Phan MS, Pylvänäinen JW, et al. TrackMate 7: integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat Methods. 2022;19(7):829-832. https://doi.org/10.1038/s41592-022-01507-1.
Karatzas D, Shafait F, Uchida S, et al. ICDAR 2013 robust reading competition. In: 2013 12th International Conference on Document Analysis and Recognition: IEEE; 2013. p. 1484–93. https://doi.org/10.1109/ICDAR.2013.221.
Puddinu V, Casella S, Radice E, et al. ACKR3 expression on diffuse large B cell lymphoma is required for tumor spreading and tissue infiltration. Oncotarget. 2017;8:85068–84. www.impactjournals.com/oncotarget.
Fazeli E, Roy NH, Follain G, et al. Automated cell tracking using StarDist and TrackMate. F1000Research. 2020;9(1279):1–13.
We are thankful to Benedikt Thelen for technical discussions.
The study was supported by the Swiss Cancer league with the grant KFS-4223-08-714 2017-R (MT) and by the Platform for Advanced Scientific Computing with the grant ExaTrain (RK). The IRB is supported by the Helmut Horten Foundation.
Ethics approval and consent to participate
Consent for publication
The authors express their consent for publication.
The authors declare no competing financial interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Benchmark in response to intensity shifts. A. representative micrographs of images with shifted signal excursion (minimum) from 0 to 232. B. Segmentation accuracy metrics vs. intensity shifts. IoU refers to Intersection over Union. Pixel difference is the mean error in pixel intensity vs. manually annotated binary masks. n = 4 different image series, with 32 different levels.
Supplementary Fig. 2. Representative micrograph showing tracks of cells obtained with PF (magenta square), TL (black square), or CFP (cyan square), using TrackMate analysis. Detection of cells was performed using either the LoG detector (above) or the Threshold detector (below).
Usage workflow. The pipeline to convert low-resolution TL images to pseudofluorescence is made available via the WID-U plugin. For sporadic use, it is possible to export the images and process them on an online deep learning platform such as Google COLAB (top). For routine use, the tool can be easily installed on a GPU-enabled machine or on the same computer by using a virtual machine. In Imaris, once installed and configured for communication with a deep learning-enabled machine, the user can launch the plugin and use the Spots/Surfaces tools for cell detection/tracking. In FIJI, once the image has been opened, the user can launch the WID-U plugin, which will ask for the IP address of the machine to be used for the computation. Once completed, a new imaging channel is created and the TrackMate plugin can be used for spot detection and tracking.
Class-1 probability decay. Representative map of the class-1 probability showing peaks inside the objects
Antonello, P., Morone, D., Pirani, E. et al. Tracking unlabeled cancer cells imaged with low resolution in wide migration chambers via U-NET class-1 probability (pseudofluorescence). J Biol Eng 17, 5 (2023). https://doi.org/10.1186/s13036-022-00321-9