Physicochemical determinants of blood-brain barrier penetrating molecules
Sneha Pandey1, Anoop Kumar Tiwari2, Kottakkaran Sooppy Nisar3, Abhigyan Nath1*
1Department of Biochemistry, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur 492001, India
2Department of Computer Science and Information Technology,
Central University of Haryana, Mahendergarh 123031, India
3Department of Mathematics, College of Science and Humanities in Alkharj,
Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
*Corresponding Author E-mail: abhigyannath01@gmail.com
ABSTRACT:
The blood-brain barrier (BBB) is an essential physiological barrier that regulates the transport of substances from the circulation to the brain. Accurate prediction of BBB permeability is essential for understanding drug delivery to the brain and for developing effective therapies for neurological disorders. Clinical experiments provide the most accurate measure of BBB permeability; nevertheless, these methods are time-consuming and labor-intensive. Consequently, several computational methods have attempted to predict BBB permeability, but their accuracy remains a challenge. In this investigation, we present a novel strategy for enhancing the precision of BBB permeability prediction models. Our model integrates a diverse set of molecular descriptors and employs advanced machine-learning algorithms to identify complex connections between chemical compounds and BBB permeability. By using a large dataset of experimental observations and various resampling techniques, we increased the prediction performance of our model. Two machine-learning algorithms, Random Forest (RF) and Gradient Boosting Machine (GBM), were used to predict BBB permeability and were further analyzed using model-agnostic interpretation methods. The highest accuracy of 92.5% was obtained by RF with the JOELib descriptor feature set (SMOTE oversampled), followed by RF with the JOELib descriptor feature set (GAN oversampled) with an accuracy of 92.1%. Shapley plots, accumulated local effects (ALE) plots, and variable importance plots (VIP) were used to depict the significance of the features.
KEYWORDS: Blood-brain barrier penetrating molecules, Gradient Boosting Machines, JOELib, Accumulated Local Effects, SMOTE, GAN.
INTRODUCTION:
The blood-brain barrier (BBB) is an intricate network of specialized blood vessels that is essential for maintaining homeostasis in the central nervous system (CNS) by permitting the selective passage of specific substances. Brain capillary endothelial cells (BECs) regulate the BBB's permeability.
Endothelial cells lining the brain's capillaries are tightly packed together, forming tight junctions that prevent large molecules and pathogens from passing through. Accurate prediction models for BBB permeability are vital for drug research, and machine-learning algorithms have been employed for this purpose; however, the development of these models requires thorough examination1. Neurological diseases pose a significant health challenge, affecting 28% of patients across all age groups2, with mortality increasing substantially to 39% in recent decades3. Despite the decline in communicable diseases, there is a lack of effective treatments for neurological diseases, highlighting the need for potent therapies targeting the CNS4. The BBB plays an important role in CNS homeostasis and in protection against toxins, pathogens, and infections, and it restricts the passage of immune cells to reduce inflammation and protect delicate neural tissue5. However, the challenge lies in the fact that 98% of small-molecule drugs are not BBB permeable6, making the prediction of BBB permeability a complex task in drug discovery and development7,8. The evolving trend in drug-target validation emphasizes alternative techniques that focus on molecular and signaling pathways at the BBB9. Because drugs need to traverse the BBB to exert therapeutic effects on the CNS, accurately determining BBB permeability is crucial for drug design and development1,10,11. Although clinical trials provide accurate measurements, they require a significant investment of time and labor12–14. The most challenging aspect of utilizing ML algorithms is finding the best characteristics to create prediction models using characterized BBB-penetrating datasets15,16. To address this issue, we employed ensemble learners and compared their performances. The present work leveraged the advantages of resampling methods for enhanced prediction of BBB-penetrating molecules and further analyzed black-box machine learning models with several model-agnostic explainable approaches17. A schematic representation of the proposed methodology is shown in Fig. 1.
Fig. 1 Schematic representation of the current methodology.
MATERIALS AND METHODS:
Datasets:
In the present study, we used the dataset from the database developed by Meng et al.18 and removed duplicate molecules. The resulting dataset comprised 2,303 molecules annotated with BBB permeability classes, of which 1,697 were permeable and 606 were non-permeable, giving an imbalanced dataset.
Molecular descriptors:
The calculation of molecular descriptors represents the quantitative structural or chemical information contained in a molecule. We used two types of descriptors to represent each molecule: JOELib (38 descriptors) and Open Babel (14 descriptors). Both descriptor sets were calculated using the ChemMine web server19.
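The descriptors in this study were generated with the ChemMine web server; purely as an illustration of the kinds of quantities involved, the sketch below computes a few analogous descriptors (polar surface area, hydrogen-bond acceptors and donors) with RDKit. The use of RDKit and the example molecule are assumptions of convenience, not the pipeline used in the paper.

```python
# Illustrative only: descriptors comparable to those discussed later in the
# paper (PolarSurfaceArea, HBA/HBD counts), computed with RDKit rather than
# the JOELib/Open Babel descriptors from the ChemMine web server.
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used here as an example molecule
mol = Chem.MolFromSmiles(smiles)

features = {
    "PolarSurfaceArea": Descriptors.TPSA(mol),        # topological polar surface area
    "Number_of_HBA": Descriptors.NumHAcceptors(mol),  # hydrogen-bond acceptors
    "Number_of_HBD": Descriptors.NumHDonors(mol),     # hydrogen-bond donors
    "MolWt": Descriptors.MolWt(mol),
}
print(features)
```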
Resampling Methods:
Oversampling algorithms:
To produce a balanced dataset, oversampling is employed to include additional examples of the minority class. In this study, we balanced the unequal data classes using SMOTE and GAN.
SMOTE:
The synthetic minority oversampling technique (SMOTE), developed by Chawla et al. as an intelligent alternative to random oversampling, is a k-nearest-neighbor-based approach for generating artificial instances20. Unlike random oversampling, SMOTE addresses the issue of imbalanced datasets by creating synthetic examples for the minority class, thereby minimizing the overfitting associated with duplicated samples. SMOTE selects minority-class samples for oversampling (all of them if the required number of synthetic instances exceeds the minority class size, or a random subset otherwise), identifies the k nearest minority neighbors of each selected sample, and generates a synthetic instance by interpolating between the sample and a randomly chosen neighbor using a random weight.
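A minimal sketch of SMOTE oversampling with imbalanced-learn is shown below. The synthetic make_classification data merely stand in for the real descriptor matrix and mimic the 1,697/606 imbalance of the study dataset; variable names are illustrative.

```python
# Minimal SMOTE sketch: X is a descriptor matrix, y the BBB+/BBB- labels.
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Stand-in for the descriptor dataset; 606 vs. 1697 mimics the paper's imbalance.
X, y = make_classification(n_samples=2303, n_features=38,
                           weights=[0.26, 0.74], random_state=42)

smote = SMOTE(k_neighbors=5, random_state=42)  # k nearest minority neighbors
X_res, y_res = smote.fit_resample(X, y)

print(Counter(y), "->", Counter(y_res))  # the minority class is now balanced
```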
Generative Adversarial Network (GAN):
GANs were introduced in 2014 by Ian Goodfellow and colleagues and have since become widely recognized as a robust method for generative modeling21. The purpose of a GAN is to learn the underlying data distribution and produce samples from it; here, the generated data are used to address the class-imbalance problem. A conventional GAN is composed of two neural networks, a generator and a discriminator, that are trained against each other. Adversarial learning is the guiding principle of a GAN and is what drives the quality of the synthetic data.
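The paper does not specify the GAN architecture used; the following is a minimal tabular-GAN sketch in PyTorch that shows the generator/discriminator adversarial loop described above. The layer sizes, learning rates, epoch count, and the placeholder minority-class data are all assumptions.

```python
# Minimal tabular GAN sketch for oversampling the minority (BBB-) class.
import torch
import torch.nn as nn

n_features, latent_dim = 38, 16  # 38 JOELib descriptors; latent size assumed

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, n_features),            # outputs a synthetic descriptor vector
)
discriminator = nn.Sequential(
    nn.Linear(n_features, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),       # probability that the input is real
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

X_minority = torch.randn(606, n_features)  # placeholder for the real BBB- descriptors

for _ in range(200):
    # Discriminator step: distinguish real from generated samples.
    z = torch.randn(len(X_minority), latent_dim)
    fake = generator(z).detach()
    d_loss = (bce(discriminator(X_minority), torch.ones(len(X_minority), 1)) +
              bce(discriminator(fake), torch.zeros(len(fake), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator.
    z = torch.randn(len(X_minority), latent_dim)
    g_loss = bce(discriminator(generator(z)), torch.ones(len(X_minority), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, sample as many synthetic minority examples as needed.
synthetic = generator(torch.randn(1091, latent_dim)).detach()  # 1697 - 606
```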
Undersampling algorithms:
Undersampling mitigates class imbalance by decreasing the number of examples in the majority class, yielding a more balanced class distribution. In this study, we balanced the unequal data classes using random undersampling, uniform sampling (Kennard-Stone algorithm), and K-means clustering.
Random undersampling:
Random undersampling addresses imbalance by reducing the number of instances in the majority class to match the size of the minority class: a subset of majority-class instances is picked at random, while all examples of the minority class are kept. Although this technique can remove data that are important for classifier models, it is beneficial when dealing with a substantial volume of data22.
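A one-line sketch with imbalanced-learn's RandomUnderSampler, reusing the X and y from the SMOTE sketch above:

```python
# Random undersampling: the majority class is randomly reduced to the size of
# the minority class (imbalanced-learn's default sampling strategy).
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler

rus = RandomUnderSampler(random_state=42)
X_res, y_res = rus.fit_resample(X, y)    # X, y as in the SMOTE sketch above
print(Counter(y), "->", Counter(y_res))  # e.g. {1: 1697, 0: 606} -> {0: 606, 1: 606}
```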
Kennard-Stone (KS) method:
The goal of the KS algorithm is to select uniformly representative samples from an input space. A similarity or distance metric, typically the Euclidean distance, is computed for each pair of data points in the dataset. The two data points with the greatest distance between them are selected as the initial seed points of the representative subset (in a common variant, the sample closest to the dataset mean is selected first). The algorithm then iteratively adds the data point that is farthest from the samples already included in the subset, and this step is repeated until the required number of samples is obtained23–25.
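A from-scratch sketch of the procedure just described, seeded with the two most distant points; the function and demo variable names are illustrative.

```python
# Kennard-Stone selection: seed with the two farthest points, then repeatedly
# add the candidate whose distance to the nearest selected point is largest.
import numpy as np

def kennard_stone(X: np.ndarray, n_select: int) -> list[int]:
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise Euclidean
    i, j = np.unravel_index(np.argmax(dist), dist.shape)           # farthest pair
    selected = [i, j]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Distance of each candidate to its nearest already-selected sample.
        min_dists = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(min_dists))])      # most "uncovered"
    return selected

X_demo = np.random.rand(100, 5)
subset_idx = kennard_stone(X_demo, n_select=20)
```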
K-means clustering:
Clustering is a technique that groups data points so that clusters have high internal similarity and significant dissimilarity between one another26. The K-means clustering algorithm divides a dataset with a given set of attributes into a specified number of 'K' clusters, optimizing homogeneity within each cluster while maximizing heterogeneity between distinct clusters27,28. In the K-means method, a fixed number of clusters (K) is chosen from the N data points (where N is the total number of examples), and the algorithm is initiated by randomly selecting K samples as centroids, i.e., cluster centers defined as vectors of descriptors.
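The paper does not spell out how clusters are mapped to retained samples; one common way to turn K-means into an undersampler, sketched below under that assumption, is to cluster the majority class into as many clusters as there are minority samples and keep the point nearest each centroid, so the reduced set still spans the majority-class distribution.

```python
# K-means-based undersampling (assumed variant): one representative per cluster.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_undersample(X_majority: np.ndarray, k: int) -> np.ndarray:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X_majority)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X_majority[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[int(np.argmin(d))])  # sample closest to the centroid
    return X_majority[keep]

# Reduce the 1697 permeable molecules to 606 representatives to match BBB-.
X_majority_reduced = kmeans_undersample(np.random.rand(1697, 38), k=606)
```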
Machine learning algorithms:
In this study, we used two different ensemble learning algorithms, Random Forest (RF) and Gradient Boosting Machine (GBM), to generate predictive models and analyze the differences between BBB-permeable and non-permeable molecules.
Random Forest:
Random Forest (RF) is a form of ensemble learning comprising multiple base learners, specifically decision trees. The training phase utilizes subsets of the entire dataset, known as bootstrapped samples, and randomly selects features at each node of the tree29. The final classification is determined by combining the classification outputs of all the individual trees. This algorithm used the following hyperparameters: sample rate = {0.2, 1, 0.01}, max depth = {20, 40, 60, 80}, and ntrees = {50, 100, 300, 500, 800}30.
Gradient Boosting Machine:
The Gradient Boosting Machine (GBM) relies on boosting, a sequential learning process in which a loss function is evaluated at each round of the training cycle to measure the quality of fit to the training data. GBM constructs a predictive model by integrating multiple weak prediction models, such as decision trees, amalgamating the predictions of these diverse learners into a final model with accurate predictions. This algorithm used the following hyperparameters: col sample rate = {0.2, 1, 0.1}, ntrees = {50, 100, 300, 500, 800}, max depth = {20, 40, 60, 80}, learn rate = {0.001, 0.01, 0.1}, and sample rate = {0.2, 1, 0.01}31.
We used the H2O package in R to implement all the classification algorithms. A random grid-search strategy was used to obtain the best hyperparameters.
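The study used H2O in R; the sketch below reproduces the stated RF grid and the random grid-search strategy with H2O's Python API instead, under the assumptions of a hypothetical descriptor file name and a label column called "class".

```python
# Random grid search over the stated RF hyperparameter grid with H2O (Python API;
# the paper used the equivalent workflow in R).
import h2o
from h2o.estimators import H2ORandomForestEstimator
from h2o.grid.grid_search import H2OGridSearch

h2o.init()
frame = h2o.import_file("bbb_joelib_descriptors.csv")  # hypothetical file name
frame["class"] = frame["class"].asfactor()             # assumed label column
predictors = [c for c in frame.columns if c != "class"]

hyper_params = {
    "sample_rate": [0.2, 1, 0.01],
    "max_depth": [20, 40, 60, 80],
    "ntrees": [50, 100, 300, 500, 800],
}
search_criteria = {"strategy": "RandomDiscrete", "max_models": 30}

grid = H2OGridSearch(model=H2ORandomForestEstimator(nfolds=5),  # 5-fold CV
                     hyper_params=hyper_params,
                     search_criteria=search_criteria)
grid.train(x=predictors, y="class", training_frame=frame)

best_rf = grid.get_grid(sort_by="accuracy", decreasing=True).models[0]
```

The GBM grid is searched the same way by swapping in H2OGradientBoostingEstimator with the additional col_sample_rate and learn_rate entries listed above.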
Performance evaluation metrics and methods:
For the obtained data, the following performance evaluation metrics were calculated: sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC), and area under the curve (AUC). These measures are defined as follows:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
MCC = (TP × TN − FP × FN) / √[(TP + FP)(TP + FN)(TN + FP)(TN + FN)]

where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively. Sensitivity measures the accuracy of predicting the positive class, specificity measures the accuracy of predicting the negative class, and accuracy is the percentage of correct predictions over both classes. The receiver operating characteristic (ROC) curve plots the false-positive rate (1 − specificity) on the x-axis against the true-positive rate (sensitivity) on the y-axis, and the area under this curve (AUC) is used to evaluate and compare model performance. The AUC ranges from 0 to 1, with higher values indicating better performance32.
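For concreteness, the same metrics can be computed with scikit-learn as sketched below (AUC is computed from predicted probabilities); the function and argument names are illustrative.

```python
# Computing the reported metrics from labels (y_true), predicted classes
# (y_pred), and predicted positive-class probabilities (y_score).
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

def evaluate(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "mcc": matthews_corrcoef(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }
```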
RESULTS AND DISCUSSION:
Two ensemble machine-learning algorithms (RF and GBM) were employed to develop classification models. The outcomes of the two algorithms were comparable, with Random Forest demonstrating slightly superior performance. Table 1 presents the performance evaluation metrics for the imbalanced dataset using the OpenBabel and JOELib molecular descriptors for both the RF and GBM algorithms: RF obtained 88.1% accuracy and GBM achieved 87.6%. Both algorithms performed slightly better on the JOELib molecular descriptors. For further model development using the various undersampling and oversampling approaches, we therefore selected the JOELib molecular descriptors based on their superior performance compared with the OpenBabel descriptors (Table 1).
Table 1 Performance evaluation metrics on the imbalanced dataset

| Algorithm | Descriptor | Sensitivity (%) | Specificity (%) | Accuracy (%) | MCC | AUC |
|---|---|---|---|---|---|---|
| RF | OpenBabel | 96.0 | 62.4 | 87.1 | 0.653 | 0.897 |
| RF | JoeLib | 95.8 | 66.2 | 88.1 | 0.680 | 0.913 |
| GBM | OpenBabel | 97.4 | 56.3 | 86.6 | 0.636 | 0.882 |
| GBM | JoeLib | 96.1 | 63.5 | 87.6 | 0.666 | 0.899 |
Table 2 Performance evaluation metrics using undersampling with JoeLib descriptors

| Algorithm | Undersampling method | Sensitivity (%) | Specificity (%) | Accuracy (%) | MCC | AUC |
|---|---|---|---|---|---|---|
| RF | Random undersampling | 89.1 | 76.6 | 83.1 | 0.669 | 0.897 |
| RF | Kennard-Stone | 90.9 | 68.0 | 79.4 | 0.605 | 0.866 |
| RF | K-means | 87.3 | 79.4 | 83.4 | 0.672 | 0.892 |
| GBM | Random undersampling | 88.1 | 76.4 | 82.5 | 0.656 | 0.894 |
| GBM | Kennard-Stone | 88.9 | 71.0 | 80.1 | 0.612 | 0.867 |
| GBM | K-means | 85.3 | 79.1 | 82.2 | 0.651 | 0.879 |
Table 3 Performance evaluation metrics using oversampling with JoeLib descriptors

| Algorithm | Oversampling method | Sensitivity (%) | Specificity (%) | Accuracy (%) | MCC | AUC |
|---|---|---|---|---|---|---|
| RF | SMOTE | 93.1 | 91.9 | 92.5 | 0.851 | 0.973 |
| RF | GAN | 96.1 | 88.1 | 92.1 | 0.846 | 0.968 |
| GBM | SMOTE | 93.3 | 88.0 | 90.7 | 0.815 | 0.961 |
| GBM | GAN | 95.8 | 86.0 | 90.8 | 0.822 | 0.958 |
The performance evaluation metrics for the RF and GBM algorithms using JOELib molecular descriptors with the various undersampling approaches (random undersampling, Kennard-Stone undersampling, and K-means-based undersampling) are presented in Table 2. The performances of RF and GBM differ across the undersampled datasets: RF achieved its best overall accuracy of 83.4% on the K-means undersampled dataset, whereas the best overall accuracy for GBM, 82.5%, was reached on the randomly undersampled dataset. Table 3 presents the metrics for RF and GBM on the oversampled datasets produced by the SMOTE and GAN approaches. Both RF and GBM achieved ≥90% overall accuracy on the SMOTE- and GAN-oversampled datasets, with RF performing better than GBM on both. The ROC curve for the best model, RF trained and evaluated (5-fold cross-validation) on the SMOTE-oversampled dataset, is presented in Fig. 2 (A), and the variable importance plot for the same model is shown in Fig. 2 (B). PolarSurfaceArea and Number_of_HBA2 are the most important features of the RF model. The SHAP beeswarm plot for the SMOTE-trained RF model is depicted in Fig. 2 (C). PolarSurfaceArea, Number_of_acidic_groups, Number_of_HBA2, and Number_of_O_atoms show contrasting behavior for the two groups: for all these descriptors, lower values favor BBB-penetrating molecules, whereas higher values favor non-penetrating molecules.
Fig. 2 (A) ROC plot, (B) VIP plot, and (C) SHAP plot for the RF algorithm on the SMOTE-balanced dataset; (D) ALE plots for the features common to the VIP and SHAP top features.
Furthermore, we used the common important features obtained from the VIP and SHAP analyses to calculate the accumulated local effect plots shown in Fig. 2 (D). An increase in the values of PolarSurfaceArea and Number_of_HBA2 resulted in a lower average model prediction (a prediction in the direction of the negative class). The distribution of the top ten features is presented in Fig. 3 (interval plot, individual value plot, and boxplot; 0 = negative class, 1 = positive class). The box plots clearly demonstrate that the distributions of the top 10 features differ markedly between the two classes.
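The paper does not name the ALE implementation used for Fig. 2 (D); the sketch below is a simplified first-order ALE computation for a single feature of a scikit-learn-style classifier, under the assumption of quantile bins and without the per-bin count weighting of the full estimator.

```python
# Simplified first-order ALE for one feature: bin the feature into quantile
# intervals, average the change in predicted probability when the feature is
# moved across each interval, then accumulate and center the effects.
import numpy as np

def ale_curve(model, X: np.ndarray, feature: int, n_bins: int = 20):
    grid = np.quantile(X[:, feature], np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        mask = (X[:, feature] >= lo) & (X[:, feature] <= hi)
        if not mask.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        # Local effect: mean prediction change across this interval.
        effects.append(np.mean(model.predict_proba(X_hi)[:, 1] -
                               model.predict_proba(X_lo)[:, 1]))
    ale = np.cumsum(effects)           # accumulate the local effects
    return grid[1:], ale - ale.mean()  # center so the mean effect is zero
```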
Fig. 3 Distribution of features for the two classes, 1 (BBB+ve) and 0 (BBB-ve), using (A) interval plots, (B) individual value plots, and (C) boxplots.
Fig. 4 (A) ROC plot, (B) VIP plot, and (C) SHAP plot for the XGBoost model trained on the SMOTE-oversampled dataset; (D) SHAP decision plot for the same model (using 400 samples).
We further explored the dataset of BBB+ve and BBB-ve molecules using the XGBoost algorithm (implemented in Python) and analyzed it with model-agnostic methods. The ROC curve, SHAP importance bar plot, and SHAP beeswarm plot are presented in Fig. 4 (A), (B), and (C), respectively. The order of importance of the molecular descriptors from the XGBoost model corroborates that of the RF model. In the SHAP decision plot (Fig. 4 (D)), the major bifurcation between the two classes can be observed at the level of the Number_of_HBD2 descriptor. Fig. 5 (A) presents the SHAP cohort plot, which breaks down the importance of the features between the two groups: PolarSurfaceArea, Number_of_HBA2, and Number_of_Acidic_Groups are all more important in non-penetrating molecules than in penetrating molecules. The SHAP dependence plots for the three most important features are shown in Fig. 5 (B-D).
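A sketch of how such an analysis is typically produced with the shap library follows; the XGBoost hyperparameters and the variable names X_train/y_train are assumptions, not the paper's exact configuration.

```python
# SHAP analysis for an XGBoost classifier: TreeExplainer gives exact SHAP
# values for tree ensembles; the plots correspond to the beeswarm and decision
# plots described above.
import shap
import xgboost as xgb

model = xgb.XGBClassifier(n_estimators=300, max_depth=6).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)

shap.summary_plot(shap_values, X_train)               # beeswarm plot
shap.decision_plot(explainer.expected_value,          # decision plot on the
                   shap_values[:400], X_train[:400])  # first 400 samples
```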
Fig. 5 (A) SHAP cohort plot and (B-D) SHAP dependence plots for the three most important features, for the XGBoost model trained on the SMOTE-oversampled dataset (JOELib features).
The accumulated local effects for the five most important features (Number_of_Acidic_Groups, Number_of_HBA2, PolarSurfaceArea, Number_of_O_atoms, and Number_of_S_atoms) are shown in Fig. 6 (A).
Fig. 6 (A) ALE plots (five most important features) and (B) feature interaction plot for the XGBoost model trained on the SMOTE-oversampled dataset (JOELib features).
All five molecular descriptors showed distinct differences between the BBB-penetrating and non-penetrating classes. Lower Polar Surface Area values were preferred in the positive class. Fig. 6 (B) depicts the feature interaction plot for the XGBoost model, in which interactions involving Polar Surface Area, Number_of_HBA2, and Number_of_HBD2 were the most prevalent. From these figures, it becomes evident that Polar Surface Area, Number_of_HBA2, Number_of_Acidic_Groups, Number_of_HBD2, and Number_of_O_atoms are crucial features for understanding blood-brain barrier (BBB) permeability. Polar Surface Area, a molecular descriptor quantifying the surface occupied by polar atoms, reflects the fact that polar molecules face challenges in crossing the BBB owing to the barrier's lipophilic nature, which favors nonpolar substances33. A higher number of hydrogen-bond acceptors suggests greater polarity, and hydrogen-bond donors likewise influence a molecule's polarity and its interaction with the lipophilic environment, which may hinder crossing of the lipid-rich BBB34. Acidic groups, whether ionized or non-ionized, influence molecular polarity and interactions with transporters, and oxygen atoms, which often occur in polar groups, likewise affect polarity and BBB permeability35. These factors are crucial considerations in drug design to maintain therapeutic efficacy.
CONCLUSION:
The blood-brain barrier plays an important role in maintaining homeostasis and protecting the CNS from potentially harmful substances. By restricting the entry of foreign molecules, the BBB acts as a defense mechanism, preventing damage to delicate neural tissues. Our study addresses the critical challenge of accurately predicting blood-brain barrier (BBB) penetrating and non-penetrating molecules, an essential factor in improving drug delivery to the brain for effective neurological disorder therapies. The developed prediction models, employing the Random Forest and Gradient Boosting Machine algorithms, achieved remarkable accuracy: Random Forest with the JOELib descriptors on the SMOTE-oversampled dataset achieved the highest accuracy of 92.5%, followed closely by its GAN-oversampled counterpart with an accuracy of 92.1%. To interpret the models' predictions, we employed model-agnostic interpretation methods, including Shapley plots, accumulated local effects (ALE) plots, and variable importance plots (VIP), shedding light on the significance of the molecular features. The identified key descriptors from the JOELib feature set highlight their crucial role in predicting BBB permeability.
CONFLICTS OF INTEREST:
The authors declare that there are no conflicts of interest.
REFERENCES:
1. Saxena D. Sharma A. Siddiqui M. Kumar R. Blood Brain Barrier Permeability Prediction Using Machine Learning Techniques: An Update. Curr Pharm Biotechnol. 2019;20.
2. Menken M. Munsat TL. Toole JF. The Global Burden of Disease Study: Implications for Neurology. Arch Neurol. 2000; 57(3): 418–20.
3. Feigin V. Vos T. Nichols E. Owolabi M. Carroll W. Dichgans M et al. The global burden of neurological disorders: translating evidence into policy. Lancet Neurol. 2019;19.
4. Kapruwan K. Parida R. Muniyan R. In silico mutational study reveal improved interaction between Beta-Hexosaminidase A and GM2 activator essential for the breakdown of GM2 and GA2 Gangliosides on Tay-Sachs disease. Res J Pharm Technol. 2017; 10(11): 3899–902.
5. Profaci C. Munji R. Pulido R. Daneman R. The blood–brain barrier in health and disease: Important unanswered questions. J Exp Med. 2020; 217.
6. Pardridge W. Blood-brain barrier delivery. Drug Discov Today. 2007; 12: 54–61.
7. Harilal S. Jose J. Parambi DGT. Kumar R. Unnikrishnan MK. Uddin MdS et al. Revisiting the blood-brain barrier: A hard nut to crack in the transportation of drug molecules. Brain Res Bull. 2020; 160: 121–40.
8. Pardridge WM. The blood-brain barrier: Bottleneck in brain drug development. NeuroRX. 2005; 2(1): 3–14.
9. Salman M. Kitchen P. Yool A. Bill R. Recent breakthroughs and future directions in drugging aquaporins. Trends Pharmacol Sci. 2021; 43.
10. Daneman R. Prat A. The Blood-Brain Barrier. Cold Spring Harb Perspect Biol. 2015;7.
11. Rathore S. Prashant V. Prashant D. Nath A. Shivram A. Screening of Phytochemicals for Antisickling effects. Res J Pharm Technol. 2023;16(12):5790–5.
12. Bickel U. How to measure drug transport across the blood-brain barrier. NeuroRX. 2005;2(1):15–26.
13. Massey S. Urcuyo Acevedo J. Marin BM. Sarkaria J. Swanson K. Quantifying Glioblastoma Drug Response Dynamics Incorporating Treatment Sensitivity and Blood Brain Barrier Penetrance From Experimental Data. Front Physiol. 2020; 11:830.
14. Mi Y. Mao Y. Cheng H. Ke G. Liu M. Fang C. et al. Studies of blood–brain barrier permeability of gastrodigenin in vitro and in vivo. Fitoterapia. 2020;140:104447.
15. Hu Y. Zhao T. Zhang N. Zhang Y. Cheng L. A Review of Recent Advances and Research on Drug Target Identification Methods. Curr Drug Metab. 2018;19.
16. Salmanpour MR. Shamsaei M. Saberi A. Klyuzhin IS. Tang J. Sossi V et al. Machine learning methods for optimal prediction of motor outcome in Parkinson’s disease. Phys Medica PM Int J Devoted Appl Phys Med Biol Off J Ital Assoc Biomed Phys. 2020; 69: 233–40.
17. Naresh K. Prabakaran N. Kannadasan R. Boominathan P. Diabetic Medical Data Classification using Machine Learning Algorithms. Res J Pharm Technol. 2018; 11(1): 97–100.
18. Meng F. Xi Y. Huang J. Ayers PW. A curated diverse molecular database of blood-brain barrier permeability with chemical descriptors. Sci Data. 2021; 8(1): 289.
19. Backman TWH. Cao Y. Girke T. ChemMine tools: an online service for analyzing and clustering small molecules. Nucleic Acids Res. 2011;39(suppl_2):W486–91.
20. Chawla N. Bowyer K. Hall L. Kegelmeyer W. SMOTE: Synthetic Minority Over-sampling Technique. J Artif Intell Res JAIR. 2002; 16: 321–57.
21. Goodfellow I. Pouget-Abadie J. Mirza M. Xu B. Warde-Farley D. Ozair S et al. Generative Adversarial Networks. Adv Neural Inf Process Syst. 2014;3.
22. Mohammed R. Rawashdeh J. Abdullah M. Machine Learning with Oversampling and Undersampling Techniques: Overview Study and Experimental Results. ICICS. 2020; 243.
23. Daszykowski M. Walczak B. Massart DL. Representative subset selection. Anal Chim Acta. 2002 Sep 10;468(1):91–103.
24. Kennard R. Stone L. Computer Aided Design of Experiments. Technometrics. 1969; 11(1): 137–48.
25. Saptoro A. Tadé M. A Modified Kennard-Stone Algorithm for Optimal Division of Data for Developing Artificial Neural Network Models. Chem Prod Process Model. 2012;7.
26. Nath A. Subbiah K. The role of pertinently diversified and balanced training as well as testing data sets in achieving the true performance of classifiers in predicting the antifreeze proteins. Neurocomputing. 2017; 272.
27. Devi SM. Sruthi A. Jothi SC. MRI liver tumor classification using machine learning approach and structure analysis. Res J Pharm Technol. 2018; 11(2): 434–8.
28. Varghese BK. Amali D. Devi K. Prediction of parkinson’s disease using machine learning techniques on speech dataset. Res J Pharm Technol. 2019; 12(2): 644–8.
29. DencelinX L. Ramkumar T. Distributed machine learning algorithms to classify protein secondary structures for drug design-a survey. Res J Pharm Technol. 2017; 10(9): 3173–80.
30. Breiman L. Random Forests. Mach Learn. 2001; 45(1): 5–32.
31. Natekin A. Knoll A. Gradient Boosting Machines, A Tutorial. Front Neurorobotics. 2013; 7: 21.
32. Ling CX. Huang J. Zhang H. AUC: A Better Measure than Accuracy in Comparing Learning Algorithms. In: Xiang Y, Chaib-draa B, editors. Advances in Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg; 2003: 329–41.
33. Shityakov S. Neuhaus W. Dandekar T. Förster C. Analyzing Molecular Polar Surface Descriptors to Predict Blood-Brain Barrier Permeation. Int J Comput Biol Drug Des. 2013; 6: 146–56.
34. Kenny PW. Hydrogen-Bond Donors in Drug Design. J Med Chem. 2022; 65(21): 14261–75.
35. Gao Z. Chen Y. Cai X. Xu R. Predict drug permeability to blood–brain-barrier from clinical phenotypes: drug side effects and drug indications. Bioinformatics. 2017; 33(6): 901–8.
Received on 27.03.2024; Revised on 15.07.2024; Accepted on 28.09.2024; Published on 27.03.2025; Available online from March 27, 2025. Research J. Pharmacy and Technology. 2025; 18(3): 1250-1257. DOI: 10.52711/0974-360X.2025.00181 © RJPT. All rights reserved.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.