IEEE Paper Implementation on Kaggle with Python
Episode 3: Asthma & Pneumonia Wheeze Classification Using Machine Learning (Python Code)
Python implementation of the IEEE paper “Spectral Analysis of Lung Sounds for Classification of Asthma and Pneumonia Wheezing”.
Includes Kaggle dataset experiments, signal processing, and ML classification with code available on GitHub. Perfect for biomedical signal processing, machine learning, and healthcare AI projects.
In this video, I break down the IEEE research paper:
“Spectral Analysis of Lung Sounds for Classification of Asthma and Pneumonia Wheezing”
📍 Published at the IEEE International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Istanbul, Turkey
🔗 IEEE Paper: https://ieeexplore.ieee.org/document/...
🔗 IEEE Paper Explained: • Asthma & Pneumonia Detection from Lung Sou...
💻 Code Implementations:
GitHub: https://github.com/mishaurooj/spectra...
Kaggle:
1. https://www.kaggle.com/code/mishauroo...
2. https://www.kaggle.com/code/crdkhan/2...
3. https://www.kaggle.com/code/crdkhan/3...
Python Code Implementation Videos:
1. • Episode 1: Asthma & Pneumonia Wheeze Class...
2. • Episode 2: Asthma & Pneumonia Wheeze Class...
3. • Episode 3 Asthma & Pneumonia Wheeze Classi...
Spectral analysis of lung sounds
Asthma and pneumonia classification
Wheezing detection using Python
Biomedical signal processing IEEE paper
Lung sound classification Kaggle
Python respiratory sound analysis
IEEE paper implementation GitHub
Machine learning for healthcare AI
Asthma wheezing detection dataset
Pneumonia lung sound classification
How to classify lung sounds using spectral analysis in Python
Asthma vs pneumonia wheeze detection with machine learning
#ai #artificialintelligence #machinelearning
Appreciate you tuning in! 🙌 If you’re ready for tailored advice and hands-on guidance, book a one-on-one consultation with me today. Let’s connect and take your journey to the next level!
Upwork: https://www.upwork.com/freelancers/~0...
Chapters
0:09 – Goal of the Notebook: Classification with machine learning models.
0:22 – Features Recap: Loading Excel file of extracted features.
0:41 – Feeding Models: Splitting into train/test, evaluating with accuracy, precision, recall, F1.
1:19 – Loading Features File: Structure of features + labels.
1:56 – Train-Test Split: 70% training, 30% testing.
2:16 – Models Overview: Intro to ML models for lung sound classification.
2:26 – Linear Discriminant Analysis (LDA): Straight-line classifier, interpretable but limited.
2:46 – k-NN (k-Nearest Neighbors): Instance-based classifier, reasonably robust to noisy data.
3:02 – Decision Tree (Fine Tree): Nonlinear thresholds, strong performance.
3:37 – Fine k-NN: Very sensitive; tends to memorize the training data.
4:00 – Subspace k-NN: Smoothed, better than fine k-NN.
4:16 – Bagged Trees: Ensemble trees, lower variance, strong model.
4:41 – Support Vector Machines (commented out).
4:46 – Metrics Defined: Accuracy, precision, recall, F1 score.
5:49 – Errors Explained: Type I (false positives), Type II (false negatives).
6:20 – Medical Lens: Importance of low false negatives in pneumonia.
6:36 – Confusion Matrices: Visualizing per-class performance.
7:00 – Dataset Split Numbers: 14k training samples, 6k test samples.
7:42 – Linear Discriminant Results: Strong asthma predictions, struggles with normal/pneumonia.
8:49 – Naïve Bayes Results: Independence assumptions break down, poor accuracy.
9:32 – Fine Tree Results: Excellent performance, best so far.
10:44 – Fine k-NN Results: Strong asthma, weaker normal/pneumonia.
11:17 – Bagged Trees Results: Stable, rivals fine tree.
11:40 – Subspace k-NN Results: Better than fine k-NN, still overlaps.
12:11 – Summary Table: Accuracy of each model.
12:57 – Accuracy Leaders: Fine Tree (97.5%) & Bagged Trees (98.2%).
13:44 – F1 Score Comparison: FT & BT dominate, others weaker.
14:11 – Statistical Testing: Need significance test, not just accuracy.
14:35 – McNemar Test Explained: Testing if models differ significantly.
15:26 – Results of McNemar Test: FT significantly outperforms, BT close competitor.
16:25 – Strongest Model: Fine Tree = best choice for deployment.
17:08 – Weak vs Moderate vs Strong Models: Poor = NB, KNB; Moderate = LDA, k-NN; Strong = FT, BT.
17:35 – Conclusion: FT chosen as best classifier.
17:50 – Complete Workflow Recap: Preprocessing → Feature Extraction → Classification.
18:01 – Next Steps: Build applications, explore deep learning.
18:16 – Q&A + Code Sharing: Open for questions, notebook link shared.
18:28 – Outro & Subscribe.
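The classification workflow covered in the chapters (load extracted features, 70/30 train-test split, train models such as Fine Tree and Bagged Trees, evaluate with accuracy/precision/recall/F1) can be sketched roughly as below. This is a minimal sketch, not the notebook's exact code: synthetic data stands in for the real features Excel file, which the notebook loads with pandas.

```python
# Sketch of the notebook's classification workflow. Synthetic data
# stands in for the real spectral-features Excel file (an assumption;
# the actual notebook loads extracted features with pandas.read_excel).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, classification_report

# Stand-in for the features: 3 classes (asthma / normal / pneumonia)
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, n_classes=3, random_state=0)

# 70% training / 30% testing, as in the video
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "Fine Tree": DecisionTreeClassifier(random_state=0),
    # BaggingClassifier uses decision trees as its base learner by default
    "Bagged Trees": BaggingClassifier(n_estimators=30, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, y_pred):.3f}")
    print(classification_report(y_test, y_pred))  # precision, recall, F1
```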
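The error types discussed around 5:49–6:20 (Type I = false positives, Type II = false negatives) map directly onto precision and recall. A small sketch with illustrative counts (the numbers are made up, not from the paper):

```python
# Relating confusion-matrix error types to precision/recall/F1.
# Counts are illustrative only, for one class ("pneumonia") treated
# as the positive class.
tp, fp, fn, tn = 1800, 60, 25, 4115  # fp = Type I error, fn = Type II error

precision = tp / (tp + fp)  # of predicted pneumonia, how many are correct
recall = tp / (tp + fn)     # of true pneumonia cases, how many are caught
f1 = 2 * precision * recall / (precision + recall)

# In a medical setting, false negatives (missed pneumonia) are the
# costly error, so high recall matters most.
print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
```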
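The McNemar test used at 14:35–16:25 to check whether Fine Tree and Bagged Trees differ significantly can be sketched as below. The disagreement counts are illustrative, not the paper's actual numbers:

```python
# Sketch of McNemar's test for comparing two classifiers on the same
# test set. The counts b and c are illustrative assumptions.
from scipy.stats import chi2

# b: samples model A classified correctly but model B got wrong
# c: samples model B classified correctly but model A got wrong
b, c = 120, 40

# Chi-square statistic with continuity correction (1 degree of freedom)
stat = (abs(b - c) - 1) ** 2 / (b + c)
pvalue = chi2.sf(stat, df=1)

print(f"McNemar statistic = {stat:.3f}, p-value = {pvalue:.2e}")
# p < 0.05 means the two models' error patterns differ significantly,
# i.e. the accuracy gap is unlikely to be chance.
```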