Reposted from Papers
Hello. For a time-series project using a Weighted Deep Neural Network and wavelets, we need a corresponding author.
They will be the 4th author on the paper. The fee for this contribution is $250; anyone interested can message my ID to go over the details.
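
For readers curious about the method named above, here is a minimal sketch of one plausible wavelet-plus-weighted-network pipeline. The post does not define "weighted", so this reads it as a recency-weighted training loss; every name and number below is illustrative, not from the actual project.

# Illustrative sketch: wavelet features feeding a sample-weighted MLP (PyTorch + PyWavelets)
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(window, wavelet="db4", level=2):
    # Decompose a 1-D window into approximation/detail coefficients
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.concatenate(coeffs)

# Toy series: predict the next value from the previous 32 points
series = np.sin(np.linspace(0, 60, 1000)) + 0.1 * np.random.randn(1000)
X = np.stack([wavelet_features(series[i:i + 32]) for i in range(len(series) - 33)])
y = series[32:-1]

model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
w = torch.linspace(0.1, 1.0, len(y))  # recency weights: later samples count more
Xt, yt = torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)
for _ in range(200):
    opt.zero_grad()
    pred = model(Xt).squeeze(-1)
    loss = (w * (pred - yt) ** 2).mean()  # weighted MSE
    loss.backward()
    opt.step()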

@Raminmousa


Reposted from Github LLMs
From System 1 to System 2: A Survey of Reasoning Large Language Models

24 Feb 2025 · Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, Yingying Zhang, Fei Yin, Jiahua Dong, Zhijiang Guo, Le Song, Cheng-Lin Liu ·

Achieving human-level intelligence requires refining the transition from the fast, intuitive System 1 to the slower, more deliberate System 2 reasoning. While System 1 excels in quick, heuristic decisions, System 2 relies on logical reasoning for more accurate judgments and reduced biases. Foundational Large Language Models (LLMs) excel at fast decision-making but lack the depth for complex reasoning, as they have not yet fully embraced the step-by-step analysis characteristic of true System 2 thinking. Recently, reasoning LLMs like OpenAI's o1/o3 and DeepSeek's R1 have demonstrated expert-level performance in fields such as mathematics and coding, closely mimicking the deliberate reasoning of System 2 and showcasing human-like cognitive abilities. This survey begins with a brief overview of the progress in foundational LLMs and the early development of System 2 technologies, exploring how their combination has paved the way for reasoning LLMs. Next, we discuss how to construct reasoning #LLMs, analyzing their features, the core methods enabling advanced reasoning, and the evolution of various reasoning LLMs. Additionally, we provide an overview of reasoning benchmarks, offering an in-depth comparison of the performance of representative reasoning LLMs. Finally, we explore promising directions for advancing reasoning LLMs and maintain a real-time GitHub repository (https://github.com/zzli2022/Awesome-Slow-Reason-System) to track the latest developments. We hope this survey will serve as a valuable resource to inspire innovation and drive progress in this rapidly evolving field.
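
As a concrete illustration of the System 1 / System 2 contrast the survey builds on, here is the classic bat-and-ball question posed two ways; this is a generic example, not one from the paper.

# Contrasting a "System 1" direct prompt with a "System 2" step-by-step prompt
question = ("A bat and a ball cost $1.10 in total; the bat costs $1.00 "
            "more than the ball. What does the ball cost?")

system1_prompt = f"Answer with a single number.\n{question}"

system2_prompt = ("Reason step by step, checking each intermediate result, "
                  f"then state the final answer.\n{question}")

# The intuitive (System 1) reply is often the wrong "$0.10";
# deliberate (System 2) reasoning yields the correct "$0.05".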

Paper: https://arxiv.org/pdf/2502.17419v1.pdf

Code: https://github.com/zzli2022/awesome-slow-reason-system

Datasets: GSM8K - MedQA - MathVista - GPQA - MMLU-Pro - PGPS9K

💠https://t.me/deep_learning_proj


Competitive Programming with Large Reasoning Models
OpenAI


Link

@Machine_learn


ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs

📚 Read


@Machine_learn


The Hundred-Page Language Models Book

📕 Book

@Machine_learn


AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement

🖥 Github: https://github.com/thu-coai/AISafetyLab

📕 Paper: https://arxiv.org/abs/2502.16776v1

🌟 Dataset: https://paperswithcode.com/dataset/gptfuzzer

@Machine_learn


🎨 Can AI design truly novel concepts like humans? Check SYNTHIA, a breakthrough in T2I generation!

🤖 SYNTHIA composes affordances to create visually novel & functionally coherent designs.

📄 https://arxiv.org/pdf/2502.17793
💻 https://github.com/HyeonjeongHa/SYNTHIA
🎥 https://youtube.com/watch?v=KvsOx44WdzM


@Machine_learn


Reposted from Papers
Hello. We need a third author for the paper below.

Status: revised 🔥

💠Advancements in Deep Learning for predicting Drug-Lipid interactions in liposomal drug delivery
 
 
🔹Abstract
 
Liposomal drug delivery systems have improved cancer therapeutics by enhancing drug stability, allowing selective tissue targeting, and reducing off-target effects. A central challenge, however, is how to maximize drug-lipid interactions while developing personalized treatment alternatives. Traditional computational-biology methods, such as molecular dynamics simulations, are useful but face challenges in scalability and computational cost. This study focuses on the use of deep learning algorithms, namely Graph Neural Networks (GNNs), Attention Mechanisms, and Physics-Informed Neural Networks (PINNs), for predicting and optimizing drug-lipid interactions in liposomal formulations. These models can handle complex datasets with comparatively simple architectures, recognizing intricate interaction patterns while adhering to the physics governing the problem. Through several case studies, we highlight the practicality of these models for predicting encapsulation efficiency and drug-release kinetics and for developing controlled drug delivery systems for cancer treatment. In addition, transfer learning and meta-learning improve model transferability across different drug-lipid matrices, a step toward personalized medicine. Our results highlight that combining deep learning with experimental and clinical evidence enhances predictive performance and expands scope, thereby facilitating more exact and individualized treatment modalities. Such an interdisciplinary approach can greatly improve treatment efficacy and expand the horizons of precision medicine in nanomedicine.
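
Since the abstract leans on PINNs for release kinetics, here is a minimal illustrative PINN in PyTorch, not the study's model: a small network fits sparse concentration measurements while being penalised for violating an assumed first-order release law dC/dt = -kC; the rate constant and data points are made up.

import torch
import torch.nn as nn

k = 0.5  # assumed release-rate constant (illustrative)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Sparse "measurements" drawn from the analytic solution C(t) = exp(-k t)
t_data = torch.tensor([[0.0], [1.0], [3.0]])
c_data = torch.exp(-k * t_data)

# Collocation points where the physics residual is enforced
t_phys = torch.linspace(0, 5, 50).reshape(-1, 1).requires_grad_(True)
for _ in range(2000):
    opt.zero_grad()
    data_loss = ((net(t_data) - c_data) ** 2).mean()
    c = net(t_phys)
    dc_dt = torch.autograd.grad(c.sum(), t_phys, create_graph=True)[0]
    phys_loss = ((dc_dt + k * c) ** 2).mean()  # residual of dC/dt = -kC
    (data_loss + phys_loss).backward()
    opt.step()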
 
 
Keywords: Liposomal drug delivery, Deep Learning models, Drug-Lipid interactions, Physics-Informed Neural Networks (PINNs), Encapsulation efficiency, Personalized medicine, Nanomedicine.
 
Journal: https://link.springer.com/journal/11831
IF: 9.9
To place an order, message my ID.
@Raminmousa
@Paper4money
@Machine_learn


Ramadan Kareem ❤️
@Machine_learn


Reposted from Papers
One of the better tools I have been able to develop is Stock Ai, which uses 360 indicators. Back-test reports for the tool are available in the videos below.

May 2024:

https://youtu.be/aSS99lynMFQ?si=QSk8VVKhLqO_2Qi3

July 2024:

https://youtu.be/ThyZ0mZwsGk?si=FKPK7Hkz-mRx-752&t=209

We are therefore planning to write a paper on this work; writing begins on Esfand 20 (around March 10). Anyone who can contribute in any way before the paper starts can sign up.
Author slots 3 and 5 are still open.
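
For context, a toy illustration of the indicator-plus-back-test shape described above, using a single moving-average crossover; the actual Stock Ai tool and its 360 indicators are not public, so none of this is its real code.

import pandas as pd
import numpy as np

prices = pd.Series(np.cumsum(np.random.randn(500)) + 100)  # toy price path
fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int).shift(1).fillna(0)  # long when fast > slow, no lookahead

returns = prices.pct_change().fillna(0)
strategy = signal * returns
print("buy & hold:", (1 + returns).prod() - 1)
print("strategy:  ", (1 + strategy).prod() - 1)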

@Raminmousa
@Machine_learn
@Paper4money


Reposted from Github LLMs
Slamming: Training a Speech Language Model on One GPU in a Day

19 Feb 2025 · Gallil Maimon, Avishai Elmakies, Yossi Adi ·

We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and tweaking of all other components. We empirically demonstrate that this training recipe also scales well with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform predicted compute-optimal performance, giving an optimistic view of #SLM feasibility. See code, data, models, and samples at https://pages.cs.huji.ac.il/adiyoss-lab/slamming.

Paper: https://arxiv.org/pdf/2502.15814v1.pdf

Code: https://github.com/slp-rl/slamkit



https://t.me/deep_learning_proj




Reposted from Github LLMs
Tutorial: Train your own Reasoning model with GRPO

📓 Tutorial

https://t.me/deep_learning_proj
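
For context on what the tutorial covers, a minimal sketch of GRPO's core step: rewards for a group of sampled completions are normalised within the group and used as advantages, in place of a learned value function. Values here are illustrative placeholders.

import torch

# One group of completions for the same prompt, scored by a reward function
rewards = torch.tensor([0.0, 1.0, 1.0, 0.0, 1.0])
adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)  # group-relative advantage

# Policy-gradient step on per-completion log-probs, weighted by the advantage
# (in real training these log-probs come from the model being tuned)
log_probs = torch.randn(5, requires_grad=True)  # placeholder values
loss = -(adv.detach() * log_probs).mean()
loss.backward()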





The Data Science Design Manual

📓 Book


@Machine_learn


OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia

Large Language Models (LLMs) have made significant progress in various downstream tasks, inspiring the development of Speech Understanding Language Models (SULMs) to enable comprehensive speech-based interactions. However, most advanced SULMs are developed by the industry, leveraging large-scale datasets and computational resources that are not readily available to the academic community. Moreover, the lack of transparency in training details creates additional barriers to further innovation. In this study, we present OSUM, an Open Speech Understanding Model designed to explore the potential of training SULMs under constrained academic resources. The OSUM model combines a Whisper encoder with a Qwen2 LLM and supports a wide range of speech tasks, including speech recognition (ASR), speech recognition with timestamps (SRWT), vocal event detection (VED), speech emotion recognition (SER), speaking style recognition (SSR), speaker gender classification (SGC), speaker age prediction (SAP), and speech-to-text chat (STTC). By employing an ASR+X training strategy, OSUM achieves efficient and stable multi-task training by simultaneously optimizing ASR alongside target tasks. Beyond delivering strong performance, OSUM emphasizes transparency by providing openly available data preparation and training methodologies, offering valuable insights and practical guidance for the academic community. By doing so, we aim to accelerate research and innovation in advanced SULM technologies.
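
A rough sketch of the ASR+X idea as the abstract describes it; `model`, `batch`, and the `loss` interface are hypothetical stand-ins, not OSUM's actual API.

# Hypothetical sketch of ASR+X joint training (not OSUM's real interface)
def asr_plus_x_loss(model, batch, alpha=1.0):
    # ASR is always optimised, which stabilises multi-task training
    asr_loss = model.loss(batch.audio, batch.transcript, task="ASR")
    # X is the target task for this batch: SER, VED, SGC, STTC, ...
    x_loss = model.loss(batch.audio, batch.target, task=batch.task)
    return asr_loss + alpha * x_loss  # joint objective per training step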

Paper: https://arxiv.org/pdf/2501.13306v2.pdf

Code: https://github.com/aslp-lab/osum

Datasets: LibriSpeech - IEMOCAP



@Machine_learn


Mathematics of Backpropagation Through Time.

📕 Paper
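
For orientation, the core identity such treatments derive (standard BPTT for a vanilla RNN, stated here for convenience, not taken from the linked paper): with hidden state $h_t = f(W h_{t-1} + U x_t)$ and total loss $L = \sum_t L_t$, unrolling the recurrence gives

$$\frac{\partial L}{\partial W} = \sum_{t=1}^{T} \sum_{k=1}^{t} \frac{\partial L_t}{\partial h_t} \Bigg(\prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}}\Bigg) \frac{\partial h_k}{\partial W},$$

where the repeated Jacobian product $\prod_j \partial h_j / \partial h_{j-1}$ is exactly the term that makes long-range gradients vanish or explode.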

@Machine_learn


📃 Methods of decomposition theory and graph labeling in the study of social network structure


📎 Study the paper

@Machine_learn


PKG-LLM: A Framework for Predicting GAD and MDD Using Knowledge Graphs and Large Language Models in Cognitive Neuroscience

Ali Sarabadani, Hadis Taherinia, Niloufar Ghadiri, Ehsan Karimi Shahmarvandi, Ramin Mousa *

Abstract
Purpose: This research project has a single purpose: the construction and evaluation of PKG-LLM, a knowledge graph framework intended primarily for cognitive neuroscience. It aims to improve the prediction of relationships among neurological entities and to strengthen named entity recognition (NER) and relation extraction (RE) on large neurological datasets. Employing GPT-4 and expert review, we aim to demonstrate how this framework can outperform traditional models in precision, recall, and F1 score, providing key insights into possible future clinical and research applications in neuroscience.

Method: PKG-LLM was evaluated on two primary tasks, relation extraction (RE) and named entity recognition (NER), with precision, recall, and F1-score computed using GPT-4. An expert review process, in which neurologists and domain experts reviewed the extracted relationships and entities, further improved the final metrics. Comparative performance was reported against StrokeKG and Heart Failure KG. PKG-LLM was also assessed on link prediction in cognition using Mean Rank (MR), Mean Reciprocal Rank (MRR), and Precision at K (P@K), and compared with other link-prediction models, including TransE, RotatE, DistMult, ComplEx, ConvE, and HolmE.

Findings: PKG-LLM demonstrated competitive performance on both tasks. In its traditional form, it achieved a precision of 75.45%, recall of 78.60%, and F1-score of 76.89% in relation extraction, improving to 82.34%, 85.40%, and 83.85% after expert review. In named entity recognition, the traditional model scored 73.42% precision, 76.30% recall, and 74.84% F1-score, improving to 81.55%, 84.60%, and 82.99% after expert review. For link prediction, PKG-LLM achieved an MRR of 0.396, P@1 of 0.385, and P@10 of 0.531, placing it in a competitive range against models like TransE, RotatE, and ConvE.

Conclusion: This study showed that PKG-LLM, once expert review is added, outperforms existing models on relation extraction and named entity recognition. The model's competitive edge in link prediction further supports its use for knowledge graph construction and refinement in cognitive neuroscience. Its ability to generate more accurate, clinically relevant results indicates that it is a significant tool for augmenting neuroscience research and clinical applications. The evaluation process, combining GPT-4 with expert review, helps ensure that the resulting knowledge graph is both scientifically sound and practically useful for advanced cognitive neuroscience research.
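
For reference, the link-prediction metrics quoted above follow standard definitions; a small sketch of how MRR and P@K are computed from the rank of the gold entity (not code from the paper):

import numpy as np

def mrr_and_p_at_k(true_ranks, k=10):
    # true_ranks: 1-based rank of the gold entity among scored candidates
    ranks = np.asarray(true_ranks, dtype=float)
    mrr = (1.0 / ranks).mean()        # mean reciprocal rank
    p_at_k = (ranks <= k).mean()      # fraction of queries ranked in top k
    return mrr, p_at_k

print(mrr_and_p_at_k([1, 3, 2, 50], k=10))  # -> (~0.46, 0.75)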


Link: https://www.preprints.org/manuscript/202502.0982/v1
@Machine_learn
