
Javid Dadashkarimi

@dadashkarimi

Postdoc at University of Pennsylvania, former developer at Martinos Center at MGH/Harvard, Yale '23, medical image analysis 🧠, deep learning, connectomics (he/him/his)

132 Followers · 288 Following · 13 Posts · Joined 25.11.2024

Latest posts by Javid Dadashkarimi @dadashkarimi

Jensen Huang’s Advice for CEOs and Students | NVIDIA
YouTube video by PricePros

“People who can suffer are ultimately the ones who are the most successful.” Jensen Huang’s (NVIDIA’s CEO) advice to students: youtu.be/zqI-EWQG8ZI?...

24.01.2025 13:58 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I love this quote from Benjamin Franklin: ‘Either write things worth reading, or do things worth writing.’ @upenn.edu

27.12.2024 03:16 πŸ‘ 4 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Now that those #OHBM abstracts are done, think about submitting to this connectivity workshop. A stellar lineup of speakers is set! Register (and submit abstracts) here:
medicine.yale.edu/mrrc/about/s...
Deadline is Jan 10, 2025.

18.12.2024 12:20 πŸ‘ 27 πŸ” 12 πŸ’¬ 1 πŸ“Œ 2

7/
We tested our method on two datasets:
- HASTE images.
- EPI scans.
And showed that it reaches state-of-the-art performance, especially in younger fetuses. Our model is also contrast-agnostic: it generalizes to various modalities. You can find our preprint at arxiv.org/pdf/2410.20532

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

6/
Testing: Step 2 (Fine-Level)
- Model B handles mid-sized patches (96Β³) on the cropped volume. Same for model C with 64Β³ windows.
- Majority voting across A, B, and C defines consistent regions likely containing the brain.
- Model D refines the final binary mask to avoid edge effects.
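The voting step above can be sketched in a few lines. A minimal sketch assuming equally shaped binary masks; `majority_vote` is an illustrative name, not the paper's implementation:

```python
import numpy as np

def majority_vote(masks):
    """Combine binary brain masks by per-voxel majority vote.

    masks: list of equally shaped 0/1 arrays, e.g. predictions from
    models A, B, and C at their respective patch sizes. A voxel is
    kept when more than half of the models mark it as brain.
    """
    votes = np.stack(masks, axis=0).sum(axis=0)
    return (2 * votes > len(masks)).astype(np.uint8)

# Toy 1-D example: only the first voxel collects 2-of-3 votes.
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
c = np.array([1, 0, 0, 0])
print(majority_vote([a, b, c]))  # -> [1 0 0 0]
```

A final model (D in the thread) would then run over the consensus region to clean up mask boundaries.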

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

5/
Testing: Step 1 (Breadth-Level)
Model A scans large patches (128Β³) for the brain.
Model D tests tiny patches (32Β³) to ensure fine-grained accuracy.
Combined masks crop the image to areas of interest for progressively finer refinement.
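The breadth-level scan can be illustrated with a sliding-window generator plus a bounding-box crop. A hedged sketch with tiny toy windows standing in for the 128³ and 32³ patches; the function names are hypothetical:

```python
import numpy as np

def sliding_windows(volume, patch=4, stride=2):
    """Yield (corner, window) pairs tiling a 3-D volume.

    A breadth-level pass would feed each window to the coarse model
    and accumulate the per-window predictions into a full-size mask.
    """
    d, h, w = volume.shape
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                yield (z, y, x), volume[z:z + patch, y:y + patch, x:x + patch]

def crop_to_mask(volume, mask, margin=1):
    """Crop a volume to the bounding box of a binary mask plus a margin."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

volume = np.arange(512).reshape(8, 8, 8)
print(sum(1 for _ in sliding_windows(volume)))  # -> 27 windows
```

In the real pipeline each window is classified by the corresponding U-Net, and the combined mask drives the crop that later, finer models operate on.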

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

4/
To tackle maternal tissues that usually confuse U-Nets, we train 4 U-Nets:
- Each is optimized for different patch sizes.
- Synthetic images include full, partial, and absent brains.
This multi-scale approach prepares us to handle complex scenarios during testing.
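One way to read the multi-scale setup: each model has its own patch size, and random crops of the synthetic volumes naturally yield full, partial, and brain-free examples. A sketch under those assumptions; the letter-to-size mapping follows the thread, while the sampler itself is illustrative:

```python
import numpy as np

# Patch sizes per model, as described in the thread (voxels per side).
PATCH_SIZES = {"A": 128, "B": 96, "C": 64, "D": 32}

def sample_training_patch(image, labels, patch, rng):
    """Randomly crop one (patch, patch, patch) training example.

    Because crops land anywhere in the synthetic volume, they
    naturally contain full, partial, or absent brains, matching the
    scenarios each model must cope with at test time.
    """
    corner = [int(rng.integers(0, s - patch + 1)) for s in image.shape]
    sl = tuple(slice(c, c + patch) for c in corner)
    return image[sl], labels[sl]

# Demo with a tiny 8^3 volume and a toy patch size.
rng = np.random.default_rng(0)
image = np.zeros((8, 8, 8))
labels = np.ones((8, 8, 8), dtype=np.uint8)
x, y = sample_training_patch(image, labels, patch=4, rng=rng)
print(x.shape)  # -> (4, 4, 4)
```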

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

3/
Our synthesizer has two components:
- One controls the shape of the brain (applied to labels 1 to 7).
- One manages the background (label 0 and labels 8 to 24).

Separate parameters for each category give us fine control over the variability of the shapes (e.g., warping, scaling, noise).
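A toy version of the two-component idea, assuming the label grouping from the thread (brain: 1 to 7, background: 0 and 8 to 24); the intensity model here is a simplified stand-in for the actual warp/scale/noise parameters:

```python
import numpy as np

# Label grouping from the thread: brain 1-7; background 0 and 8-24.
BRAIN_LABELS = set(range(1, 8))

def synthesize_image(label_map, rng, brain_noise=0.05, background_noise=0.2):
    """Render a label map into a grayscale image with per-group variability.

    Each label receives a random mean intensity; voxel noise uses a
    group-specific scale, a simplified stand-in for keeping separate
    synthesis parameters for brain and background.
    """
    image = np.zeros(label_map.shape, dtype=float)
    for label in np.unique(label_map):
        scale = brain_noise if int(label) in BRAIN_LABELS else background_noise
        mask = label_map == label
        image[mask] = rng.uniform(0, 1) + rng.normal(0, scale, int(mask.sum()))
    return image

label_map = np.zeros((6, 6), dtype=np.uint8)
label_map[2:4, 2:4] = 3  # a small "brain" region
print(synthesize_image(label_map, np.random.default_rng(0)).shape)  # -> (6, 6)
```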

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2/
During training, we augment label maps with random background shapes:
- A big ellipse (womb-like).
- Contours inside/outside the ellipse.
- Synthetic "sticks" and "bones" mimicking maternal anatomy.
This creates diverse and realistic label maps.
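The background augmentation above can be sketched in 2-D; the ellipse and "stick" label values below are hypothetical placeholders for the womb-like and maternal-anatomy shapes, and a real pipeline would work in 3-D:

```python
import numpy as np

def random_background(shape, rng, n_sticks=3):
    """Build a toy 2-D background label map.

    A womb-like ellipse is written as label 8 and a few short "stick"
    segments as label 9; both label values are illustrative stand-ins
    for the maternal-anatomy shapes described in the thread.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros(shape, dtype=np.uint8)
    # Big ellipse centered in the field of view.
    cy, cx, ry, rx = h / 2, w / 2, h * 0.4, w * 0.3
    labels[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0] = 8
    # Random horizontal sticks mimicking maternal anatomy.
    for _ in range(n_sticks):
        y0 = int(rng.integers(0, h))
        x0 = int(rng.integers(0, w - 5))
        labels[y0, x0:x0 + 5] = 9
    return labels

bg = random_background((32, 32), np.random.default_rng(0))
print(bg.shape)  # -> (32, 32)
```

Each draw yields a different label map, which is what makes the synthetic training stream effectively endless.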

29.11.2024 19:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🧡 1/ Do you have limited annotations and need a robust fetal brain extraction model with endless training data?
We introduce Breadth-Fine Search (BFS) and Deep Focused Sliding Window (DFS): a framework trained on infinite synthetic images derived from a small set of annotated seeds (label maps).

29.11.2024 19:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Testing the Tests: Using Connectome-Based Predictive Models to Reveal the Systems Standardized Tests and Clinical Symptoms are Reflecting Neuroimaging has achieved considerable success in elucidating the neurophysiological underpinnings of various brain functions. Tools such as standardized cognitive tests and symptom inventories have p...

Nice work from Anja Samardzija and team. Instead of using CPM to identify networks, networks are predefined and used to evaluate external measures. This provides a framework for the development of improved tests assessing specific brain networks.
www.biorxiv.org/content/10.1...

26.11.2024 12:53 πŸ‘ 20 πŸ” 6 πŸ’¬ 5 πŸ“Œ 1