

Hello everyone. Many friends have asked me about designing deep learning projects from start to finish, so in the pack below I have covered 36 projects in detail:

1-Deep Learning Basic
-01_Introduction
--01_How_TensorFlow_Works
2-Classification apparel
-Classification apparel double capsule
-Classification apparel double CNN
3-Alzheimer's Using CNN (ResNet)
4-Fake News (Covid-19 dataset)
-Multi-channel
-3DCNN model
-Baseline + Char CNN
-Fake News Covid CapsuleNet
5-3DCNN Fake News
6-Recommender Systems
-GRU+LSTM MovieLens
7-Multi-Domain Sentiment Analysis
-Dranziera CapsuleNet
-Dranziera CNN Multi-channel
-Dranziera LSTM
8-Persian Multi-Domain SA
-Bi-GRU Capsule Net
-Multi-CNN
9-Recommendation system
-Factorization Recommender, Ranking Factorization Recommender, Item Similarity Recommender (turicreate)
-SVD, SVD++, NMF, Slope One, k-NN, Centered k-NN, k-NN Baseline, Co-Clustering (surprise)
10-NIH X-Ray
-Optimized CNN on full NIH X-Ray dataset
-MobileNet
-Transfer learning
-Capsule Network on full NIH X-Ray dataset
Friends who need these projects can get in touch with me.
@Raminmousa
@Machine_learn


Arcade Academy - Learn Python

📖 Book

@Machine_learn




KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation

Paper: https://arxiv.org/pdf/2409.13731v3.pdf

Code: https://github.com/openspg/kag

Dataset: 2WikiMultiHopQA

🔸@Machine_learn


Python for Everybody: Exploring Data Using Python 3

📓 book

@Machine_learn


Large Language Models Course: Learn by Doing LLM Projects

🖥 Github: https://github.com/peremartra/Large-Language-Model-Notebooks-Course

📕 Paper: https://doi.org/10.31219/osf.io/qgxea

@Machine_learn


04. CNN Transfer Learning.pdf
2.1Mb
📚 Transfer Learning for CNNs: Leveraging Pre-trained Models


Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).


🌐 Why Transfer Learning?


1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.


💸 How Transfer Learning Works:


1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps to preserve the learned features.
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning (see the sketch after this list).
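
Below is a minimal Keras sketch of steps 1-4, assuming TensorFlow 2.x with ImageNet weights from keras.applications; the input size, the added layer sizes and the binary output head are placeholders, and train_ds/val_ds stand in for your own dataset.

from tensorflow import keras

# 1. Choose a pre-trained model (VGG16 here) without its original classifier head.
base_model = keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# 2. Freeze the pre-trained layers so their weights are not updated.
base_model.trainable = False

# 3. Add new task-specific layers on top of the frozen backbone.
inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)                      # run the backbone in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(256, activation="relu")(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)    # binary task assumed
model = keras.Model(inputs, outputs)

# 4. Fine-tune: only the new layers are trainable at this point.
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)     # plug in your own data here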


🔊 Common Transfer Learning Scenarios:


1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest (see the example after this list).
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task.
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
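
As an illustration of scenario 1 (feature extraction), here is a minimal sketch assuming TensorFlow 2.x and scikit-learn; MobileNetV2 as the frozen extractor and the SVM settings are arbitrary choices, and X_train/y_train/X_test/y_test are hypothetical image arrays and labels.

from tensorflow import keras
from sklearn.svm import SVC

# Pre-trained CNN used purely as a frozen feature extractor (global-average-pooled output).
extractor = keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array of shape (N, 224, 224, 3) -> feature vectors of shape (N, 1280)
    images = keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
    return extractor.predict(images, verbose=0)

# Hypothetical data: X_train/X_test are image arrays, y_train/y_test are labels.
# clf = SVC(kernel="rbf").fit(extract_features(X_train), y_train)
# print("test accuracy:", clf.score(extract_features(X_test), y_test))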


Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.

🚀 Commonly Used Transfer Learning Methods (a loading sketch follows this list):

1️⃣ VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.

2️⃣ MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions to reduce the number of parameters and computational cost.

3️⃣ DenseNet: Connects each layer to every other layer, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.

4️⃣ Inception: Employs a combination of different sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.

5️⃣ ResNet: Introduces residual connections, enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing gradient problem.

6️⃣ EfficientNet: A family of models that systematically scale up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.

7️⃣ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.
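
All of the architectures above are available as one-line constructors in keras.applications. A minimal sketch, assuming TensorFlow 2.x (the first call downloads the ImageNet weights); InceptionV3 and NASNetMobile are used here as representatives of the Inception and NASNet families:

from tensorflow import keras

apps = keras.applications
# Load each backbone without its classifier head and report its size.
backbones = {
    "VGG16":          apps.VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "MobileNetV2":    apps.MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "DenseNet121":    apps.DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "InceptionV3":    apps.InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3)),
    "ResNet50":       apps.ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "EfficientNetB0": apps.EfficientNetB0(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
    "NASNetMobile":   apps.NASNetMobile(weights="imagenet", include_top=False, input_shape=(224, 224, 3)),
}
for name, net in backbones.items():
    print(f"{name}: {net.count_params():,} parameters")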

@Machine_learn


Hello everyone, the last chance to join this paper is tomorrow night...!


🌟 RLtools

🟢TD3 - Pendulum, Racing Car, MuJoCo Ant-v4, Acrobot;
🟢PPO - Pendulum, Racing Car, MuJoCo Ant-v4 (CPU), MuJoCo Ant-v4 (CUDA);
🟢Multi-Agent PPO - Bottleneck;
🟢SAC - Pendulum (CPU), Pendulum (CUDA), Acrobot.





# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools

# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
./my_pendulum




🟡Arxiv
🟡RLTools Design Studio
🟡Demo
🟡Zoo Experiment Tracking
🟡Google Colab (Python Interface)
🖥GitHub


@Machine_learn


📌 Convex Optimization

Book

@Machine_learn


Reposted from Papers
Hello everyone,
Our first LLM paper is at the submission stage. A fourth author slot is available. To participate, contact me via my ID.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs


Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs), with the aim of increasing the accuracy, completeness and usefulness of knowledge graphs in cognitive neuroscience.

Method: To address the limitations of existing knowledge-graph construction tools, particularly in handling the complex hierarchical relationships found in the cognitive neuroscience literature, the ExKG-LLM framework applies state-of-the-art LLMs to a large collection of scientific papers and clinical reports to extract, optimize and integrate new entities and relationships into the CNKG. Performance is evaluated with metrics such as precision, recall and graph density.

Findings: The ExKG-LLM framework achieved significant improvements, including precision of 0.80 (up 6.67%), recall of 0.81 (up 15.71%) and an F1 score of 0.805 (up 11.81%), while the numbers of edges and nodes grew by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability still needs improvement. From a complex-network perspective, the diameter of the CNKG grew from 13 to 15: although ExKG-LLM enlarges the graph, more steps are now needed to reach additional nodes. Time complexity improved to O(n log n), but space complexity worsened to O(n²), indicating higher memory usage for managing the expanded graph.

Journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
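
For readers unfamiliar with the graph-level metrics quoted above (node/edge counts, density, diameter), here is a minimal networkx sketch on a toy graph; the toy entities and relations are made up, and the actual CNKG construction and ExKG-LLM pipeline are not shown.

import networkx as nx

# Toy stand-in for a cognitive-neuroscience knowledge graph (CNKG):
# nodes are entities, edges are extracted relations.
g = nx.Graph()
g.add_edges_from([
    ("hippocampus", "memory"), ("memory", "prefrontal_cortex"),
    ("prefrontal_cortex", "working_memory"), ("amygdala", "fear"),
    ("fear", "memory"),
])

print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("density:", nx.density(g))   # 2|E| / (|V|(|V|-1)) for an undirected graph
# Diameter is defined per connected component; take the largest one.
largest_cc = g.subgraph(max(nx.connected_components(g), key=len))
print("diameter of largest component:", nx.diameter(largest_cc))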


Participation fee: 12 million.
@Raminmousa
@Machine_learn
https://t.me/+SP9l58Ta_zZmYmY0


GAN.pdf
794.1Kb
Text-to-Image Generation with GANs
#GANs
@Machine_learn


Approaching (Almost) Any Machine Learning Problem.pdf
8.0Mb
Approaching (Almost) Any Machine Learning Problem
#Book
#ML

@Machine_learn


New research papers and github codes

🟢Motivo
🟡Paper 🟡Demo 🟡Github
🟢Video Seal
🟡Paper 🟡Demo 🟡Github
🟢Flow Matching
🟡Paper 🟡Github
🟢Explore Theory-of-Mind
🟡Paper 🟡Github 🟡Dataset
🟢Large Concept Model (LCM)
🟡Paper 🟡Github
🟢Dynamic Byte Latent Transformer
🟡Paper 🟡Github
🟢Memory Layers
🟡Paper 🟡Github

🟢EvalGym
🟡Paper 🟡Github

🟢CLIP 1.2
🟡Paper 🟡Github 🟡Dataset 🟡Model

@Machine_learn


Friends, the output of this work will be 3 papers...!


Reposted from Github LLMs
🌟 LLaMA-Mesh:
🟡Arxiv
🖥GitHub

https://t.me/deep_learning_proj


🌟 AlphaFold 3

🟡Paper
🟡Demo
🖥GitHub


@Machine_learn


Building Blocks for Theoretical Computer Science

🎓 Link

@Machine_learn


📑 Application of graph theory in liver research: A review

📎 Study paper

@Machine_learn
