Name: Prashant Dimri
Student ID: 17210692
E-mail: [email protected]
Programme: Masters in Data Analytics
Module code: MCM
Date of submission: 10-08-2018
Project Title: Credit Scoring in Retail Banking
Supervisor: Dr. Martin Crane

Disclaimer: A report submitted to Dublin City University, School of Computing, for module CA640 Professional and Research Practice, 2017/2018. I understand that the University regards breaches of academic integrity and plagiarism as grave and serious. I have read and understood the DCU Academic Integrity and Plagiarism Policy. I accept the penalties that may be imposed should I engage in practice or practices that breach this policy. I have identified and included the source of all facts, ideas, opinions, and viewpoints of others in the assignment references. Direct quotations, paraphrasing, discussion of ideas from books, journal articles, internet sources, module text, or any other source whatsoever are acknowledged, and the sources cited are identified in the assignment references. I declare that this material, which I now submit for assessment, is entirely my own work and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my work. By signing this form or by submitting this material online I confirm that this assignment, or any part of it, has not been previously submitted by me or any other person for assessment on this or any other course of study.
By signing this form or by submitting material for assessment online I confirm that I have read and understood the DCU Academic Integrity and Plagiarism Policy (available at http://www.dcu.ie/registry/examinations/index.shtml). Name(s): Prashant Dimri. Date: 10-08-2018.

Credit Scoring in Retail Banking: Predicting Creditworthiness of Borrowers
Prashant Dimri
Masters in Data Analytics, Dublin City University, Dublin, Ireland
[email protected]

Abstract: Credit scoring techniques are used to determine the creditworthiness of a borrower, that is, whether or not to grant a loan to a borrower. The higher the score, the safer it is for banks to lend to the borrower. The aim of this paper is to develop a retail credit scoring model using a variety of techniques (logistic regression, clustering, random forest and KNN) and to compare them, investigating how incorporating more variables improves the accuracy of the model, which are the top five factors influencing the risk, and how important credit bureau and demographic variables are.

Keywords: Credit Scoring, Logistic Regression, Retail Banking, Credit Risk.

1 Introduction
Credit is an agreement in which a borrower receives funds from a bank or financial institution and agrees to repay the lender at some future date with interest. The business aim of commercial banks is to extend credit to borrowers in order to levy a rate of interest on the transaction. The credit process carries credit risk: credit risk arises when a borrower defaults on repayment of a loan, which can be caused by various reasons such as insolvency of the borrower or wilful default (when a borrower intentionally does not pay). History shows that ineffective credit risk management can lead any bank or financial institution to bankruptcy.
So, it is imperative for banks and financial institutions to observe and accumulate information about potential borrowers and to review the performance of an accepted borrower over time; the quality of loans is thus very important for the survival and profitability of lending institutions. Customer credit scoring saves cost and time in a lending institution's decision making, thus improving the profitability of banks. The need for a formal credit scoring process first arose in the 1960s, when there was a boom in the credit card business and an automatic decision-making process became vital for business growth [1].

1.1 Credit scoring
Credit scoring is a method of estimating the probability of default by a lending institution's customers, and thus of maximizing the profitability of the bank or financial institution by minimizing the ensuing risk. It is one of the most important tools for making better credit management decisions. A credit scoring model is used to predict the creditworthiness of a borrower and thus enables lenders to see the most important variables in the decision-making process [2]. Anderson in [3] broke credit scoring into two parts: "credit" means buy now, pay later, and "scoring" refers to a numerical tool that rank-orders cases according to real or perceived quality, in order to discriminate between them and to ensure objective and consistent decisions. So, credit scoring is simply the use of statistical models to transform data into a numerical form for better decision making. The credit score determines how risky a borrower is: the higher the score, the safer it is for banks to lend to the borrower. Historically, techniques such as regression analysis, logistic regression, support vector machines, decision trees and neural networks have been widely used in building credit scoring models [4]. Credit scoring has thus been of utmost importance in making the tremendous growth in credit possible over the last five decades.
Without credit scoring, lenders would not have been able to expand their credit efficiently [5].

1.2 Traditional and credit scoring approach
The traditional approach to sanctioning a loan is normally based on a judgment call by a loan officer who analyses the details on the borrower's application form. It is based on the so-called 5 Cs principle (Character, Capital, Capacity, Collateral, and Condition), and its success depends upon the experience and common sense of the loan officer [6]. With the help of a statistical model, credit scoring converts data based on these traditional 5 Cs criteria into numerical form to make the credit decision, that is, to determine whether future customers will default or not. According to the so-called Basel II Capital Accord (Basel Committee on Banking Supervision, 2015), any loan which is not repaid within 90 days is considered a non-performing loan. With a credit scoring model, a loan officer uses a quantitative model to delineate between acceptable and unacceptable applicants. This method tends to reduce the time and cost spent by the loan officer on loan assessment and hence decreases the default ratio; it is thus far better than the traditional approach to loan assessment [7].

2 Literature review
2.1 Credit scoring in retail banking
Most credit scoring is done with non-retail loans, as the data tends to be more readily available; also, the amounts lent in the non-retail sector tend to be higher than in the retail sector [8]. In retail lending, various socio-demographic variables (age, number of dependents, etc.) along with credit bureau variables (monthly income, debt ratio, etc.) of customers are used to make predictions about the client portfolio. Through this, credit scoring is developed for estimating the probability of default on retail loans. Blazy and Weill in [9] have stated that riskier loans should be collateralized or else should not be financed.
Dinh and Kleimeier in [10] proposed a credit scoring model for Vietnamese retail loans, leading them to conclude that credit risk modelling has helped banks to radically reduce the time and cost spent on loan assessment and thus to increase the profitability of the business. Hasan in [11] built a retail credit scoring model for scarce data to find the probability of default on retail loans and concluded that even with scarce data a model can be constructed, enabling decision makers to expedite the credit appraisal process. Hand in [12] examined predictive models whereby scorecards were used to assign customers to classes, leading to the proper course of action being taken based on a customer's predicted score being above or below a given threshold. Common performance measures in use in retail banking, such as the Gini coefficient and the Kolmogorov-Smirnov statistic, may not use relevant information about the magnitude of scores and can thus lead to misclassification and degraded decision quality. Kocenda and Vojtek in [13] concluded that taking account of socio-demographic factors is imperative in the process of granting credit, and accordingly such factors should not be excluded from credit scoring model specification. Abdou et al. in [14] compared the performance of various models, using ROC curves and Gini coefficients as evaluation criteria and the Kolmogorov test for robustness, across different techniques: logistic regression, classification and regression trees, and Cascade Correlation Neural Networks (CCNN). They found that CCNN was superior to the other techniques. Also, variables like previous occupation, the functioning of the borrower's account, guarantees, other loans and monthly expenses were identified as key variables for the forecasting and decision-making processes of a credit policy.

2.2 Credit Scoring Techniques
As seen above, statistical techniques have been used in building scoring models.
Below we profile some of the commonest techniques: regression, linear programming, logistic regression, support vector machines, k-nearest neighbour, decision trees, etc. Linear regression is a method describing the relationship between a response variable and independent variables by a linear relationship. A time-wise analysis of factors such as customers, payments, guarantees and default rates can be done. Orgler in [15] used regression analysis for commercial loans, but this model was only useful in evaluating existing loans and was thus only used for loan review. In [16], the author evaluated outstanding consumer loans using this regression method. He concluded that information not on the application form had greater predictive ability than information on the application form; that is, greater predictability was obtained through the linear regression algorithm than without it. Logistic regression has long been one of the most widely used statistical techniques. The method differs from linear regression in that the outcome variable in logistic regression is dichotomous (0/1). The logistic equation is estimated by a technique known as maximum likelihood estimation; so, most often, logistic regression with more than one independent variable is fitted using maximum likelihood estimation [17]. Hand and Henley in [18] applied various statistical techniques, such as logistic regression, neural networks and recursive partitioning, to building credit scoring models. They concluded that, apart from the classification of customers into good and bad based on their initial application characteristics, there are also various statistical challenges in credit scoring, such as loan review functions (knowing when to approach customers for repayment of their loans), fraud, and questions of when and how to act on delinquent loans.
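The dichotomous-outcome setup just described can be illustrated with a short sketch (Python with NumPy and scikit-learn assumed; the data is synthetic, not this paper's dataset): a 0/1 default indicator is generated from known log-odds, and the model is then fitted by (regularised) maximum likelihood.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic applicant data (illustrative only, not the paper's dataset).
rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 70, n)
past_due = rng.poisson(0.3, n)          # count of past-due events

# True log-odds of default: decrease with age, increase with past-due count.
log_odds = -1.0 - 0.04 * (age - 40) + 1.2 * past_due
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))   # dichotomous (0/1) outcome

X = np.column_stack([age, past_due])
model = LogisticRegression().fit(X, y)  # fitted via (penalised) maximum likelihood
probs = model.predict_proba(X)[:, 1]    # estimated default probabilities in [0, 1]
print(model.intercept_, model.coef_)
```

The fitted coefficients recover the signs of the true effects, and the predicted probabilities are the scores a lender would threshold when deciding on an application.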
Hand and Zhou in [19] studied two behavioural classifications (settle immediately versus not settle immediately, and make some repayment versus make no repayment), with predictions made by a rule determining to which class each customer belongs. The aim was to construct a rule that allows objects to be assigned to one of the classes (here 0 and 1); the rule is constructed from past data for a sample of objects. Two fundamental aspects of classification rules were considered when performance was evaluated. The first was the score distribution for the two classes, 0 and 1. The second was the choice of classification threshold t, such that objects with scores greater than t are predicted to belong to class 1 and to class 0 otherwise. Misclassification arises when an object with a score above t belongs to class 0, or an object with a score below t belongs to class 1. Performance measurement is vital for choosing a rule appropriately and thus obtaining accurate predictions of the future behaviour of customers. Bekhet and Eletter in [20] proposed two credit scoring models (logistic regression and radial basis function) utilizing data mining techniques to support loan decisions for Jordanian banks. They found that assessing applications in advance would enhance the effectiveness of credit decisions, help control loan office tasks, and save time and cost of analysis. They concluded that the logistic regression model was slightly better than the radial basis function model in terms of overall accuracy rate, but the radial basis function model was better at identifying those customers who might default. Clustering is when the dataset is segmented into homogeneous clusters, i.e. objects within a group are similar to each other and different from the objects in other groups, and credit scoring can then be done on each homogeneous segment.
Though this can lead to additional cost due to development, implementation, maintenance, etc., there is at the same time the possibility of improved performance [21]. Scitovski and Šarlija in [21] performed cluster analysis by segmenting retail clients with the help of adaptive Mahalanobis clustering, such that each segment can be suited to separate credit scoring, leading to better risk assessment of retail clients. The Mahalanobis partition algorithm is used to group the set of data points. With a given dataset, it is possible to search successively for an optimal partition with k = 2, 3, … clusters; validity indexes are then used to determine the partition with the most appropriate number of clusters. Based on the description of each cluster, a bank could decide to develop a separate credit scoring model for each cluster and so create a business strategy for each cluster. Bakoben et al. in [22] used cluster analysis of the behaviour of credit card accounts to help assess credit risk levels. The authors found interesting clusters in real credit card behaviour data, in addition to superior prediction and forecasting of account default based on the clustering outcome. Random forest is a combination of tree predictors where each tree depends upon the values of a random vector, sampled independently and with the same distribution for all trees in the forest [23]. It can be used for either a categorical or a continuous response variable. Sharma in [24] attempted to improve credit scoring modelling using the random forest approach. The paper describes how random forest gives a better approach to analysing variable importance for datasets in which variables exhibit multicollinearity, and thus better predictive accuracy. The paper further shows that random forest gives more robust findings than regression models, as the former provides a powerful analysis for assessing the meaning of the variables, something not present in regression models.
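A minimal random-forest sketch (scikit-learn assumed, synthetic data and made-up variable names): each tree is grown on an independent bootstrap sample of the data, and the ensemble's impurity-based variable importances can rank predictors even when the effect is non-linear.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data (illustrative; the names only echo the paper's variables).
rng = np.random.default_rng(1)
n = 2000
utilisation = rng.uniform(0.0, 1.5, n)   # revolving utilisation
debt_ratio = rng.uniform(0.0, 2.0, n)
noise = rng.uniform(0.0, 1.0, n)         # irrelevant variable

# Non-linear (step) effect: risk jumps once utilisation exceeds 1.0.
y = rng.binomial(1, np.where(utilisation > 1.0, 0.6, 0.1))

X = np.column_stack([utilisation, debt_ratio, noise])
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances sum to 1 and rank the variables.
for name, imp in zip(["utilisation", "debt_ratio", "noise"],
                     forest.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The step-effect variable comes out on top even though no single linear coefficient could capture its influence, which is the point made about tree ensembles versus regression above.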
Sullivan et al. in [25] proposed an improved framework of logistic regression with information from decision trees. Logistic regression underperforms random forest because of its inability to model non-linear effects perfectly; thus, penalised logistic regression is introduced, with predictive variables given by univariate and bivariate threshold effects. It was further shown that penalised logistic regression has good predictive power and outperforms traditional logistic regression while remaining competitive with random forest. K-nearest neighbour (KNN) determines into which group a data point falls by determining how close the data point is to each group; that is, it falls into the group which is closest to it [26]. Guo et al. in [27] aim at remedying the drawbacks of KNN: its low efficiency as a lazy learning method, which hinders it in many applications, and its dependence on the selection of a good value of K. In their paper, the value of K is determined automatically and differs for different data, which is adequate for classification accuracy. They conclude that the dependence on K is reduced by the construction of the model, which also makes classification faster. Henley and Hand in [28] describe the construction of a credit scoring system using KNN. Selection of the distance metric was an important part of the analysis; here the Euclidean metric was selected, and this approach helped in classification. Further, its results were compared with other techniques like logistic regression and decision trees.

2.3 Performance evaluation criteria
Some of the performance evaluation criteria for credit scoring models are the confusion matrix, the receiver operating characteristic (ROC) curve, and the gain chart [29]. Descriptions of these performance criteria are given below.

Confusion matrix: it looks at how often the model has correctly predicted an event. The average correct classification rate measures the percentage of correctly classified good and bad credit ratings in a dataset.
Correct classification rates emerge from a matrix called a confusion matrix [30], otherwise known as a classification matrix [31]. It is a cross-tabulation of the actual and predicted observations in a dataset. There is an estimated misclassification cost when lenders reject loan applications which are actually good (false positives under the convention that class 1 is a default) or accept loan applications which are actually bad (false negatives); these correspond to Type 1 and Type 2 errors respectively. Sensitivity, also called recall, is true positives divided by true positives plus false negatives, whereas specificity is true negatives divided by true negatives plus false positives (precision, by contrast, is the ratio of true positives to true positives plus false positives).

Receiver operating characteristic (ROC): this has long been used to describe the true and false positive rates of classification [32]. It is a graph of sensitivity (true positive rate) on the y-axis against 1 - specificity (false positive rate) on the x-axis. Sensitivity represents bad customers classified as bad, and specificity represents good customers classified as good [33]. The closer the curve to the y-axis (the true positive axis), the better the model. The so-called area under the curve (AUC) of the ROC plot serves as a better performance measure than overall accuracy, as the latter is based on a specific cut-off point while the ROC considers all cut-off points, plotting sensitivity against 1 - specificity at each. In short, when we compare overall accuracy we are measuring accuracy at some cut-off point, and accuracy varies for different cut-off points; by default, the cut-off point is 0.5 [34].

Gain chart: it is used to determine how much better one can do with a predictive model than without it. The validation sample is scored (predicted probability) and then ranked in descending order by predicted probability.
The ranked file is then split into deciles such that an equal number of observations falls in each decile, and the cumulative number of actual events is then taken; the larger the share of events captured in the first few deciles relative to what would be observed without a model, the better the model [35].

3 Data and Pre-processing
In this paper, techniques like logistic regression, clustering, random forest and KNN are used to build the credit scoring model. In our dataset, there are 6.74% bad loans, i.e. defaults (class 1), and 93.26% good loans, i.e. non-defaults (class 0). It contains 150,000 data points with 11 variables and is taken from GitHub: https://github.com/plotly/datasets/blob/master/data.csv. The demographic variables are Age and Number of dependents, whereas the credit bureau variables include Revolving utilisation of unsecured loans, Number of 30-59 days past due not worse, Debt ratio, Monthly income, Number of open credit lines or loans, and Number of real estate loans or lines. The data description is shown in Table 1 below.

Table 1: Data description
- SeriousDlqin2yrs: person repaying the loan after 90 days past the due date (default indicator)
- Monthly income: monthly income of a customer
- Revolving utilisation of unsecured loans: total balance on credit cards and personal lines of credit divided by the sum of credit limits
- Age: age of a customer
- Number of 30-59 days past due not worse: number of times the borrower has made repayment 30-59 days past the due date
- Debt ratio: monthly debt payments divided by monthly gross income
- Number of 60-89 days past due not worse: number of times the borrower has made repayment 60-89 days past the due date
- Number of open credit lines or loans: number of open loans (like car loans or a mortgage) and lines of credit (credit cards)
- Number of real estate lines or loans: number of mortgage and real estate loans

The data contains missing values in the Monthly income variable, which are imputed with the help of linear regression.
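The regression-based imputation just described can be sketched as follows (scikit-learn assumed; toy data with made-up columns, not the paper's dataset): a linear model is fitted on the rows where income is observed, and its predictions fill the gaps.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: income depends on other, complete columns (illustrative only).
rng = np.random.default_rng(2)
n = 500
age = rng.uniform(21, 65, n)
open_lines = rng.integers(0, 15, n).astype(float)
income = 800 + 30 * age + 50 * open_lines + rng.normal(0, 100, n)

missing = rng.random(n) < 0.1            # roughly 10% of incomes go missing
income_obs = income.copy()
income_obs[missing] = np.nan

# Fit on the complete rows, predict the missing ones.
X = np.column_stack([age, open_lines])
reg = LinearRegression().fit(X[~missing], income_obs[~missing])
income_obs[missing] = reg.predict(X[missing])

print(np.isnan(income_obs).sum())        # prints 0: no gaps remain
```

In practice the fit would use the complete columns of the real dataset as predictors; the sketch only shows the mechanics.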
Correlation is checked with the help of Pearson's correlation, and highly correlated variables are removed: the variables number of 60-89 days past due not worse and number of times 90 days late are removed, as they are highly correlated with the variable number of 30-59 days past due not worse. Next, outlier detection is done. Here the percentile-capping method, also known as winsorization [36], is used: when a value at a higher percentile is extreme relative to the rest of the data (say, three times the value at the next lower percentile), it is replaced by the value at that lower percentile. In short, winsorization limits extreme values in the data to reduce the effect of outliers. Finally, the dataset is divided into training and testing datasets in a 70:30 ratio, that is, 70% of the data points are kept in the training dataset and the remaining 30% in the testing dataset.

4 Analysis of Results
4.1 Logistic Model
Two iterations of the logistic regression model were run. In the first iteration some of the variables were statistically insignificant, so only the statistically significant variables (those with p-values less than 0.05, namely number of 30-59 days past due not worse, number of open credit lines and loans, number of dependents, monthly income and age) were retained, and a second iteration was run to check for any remaining insignificant variables. No statistically insignificant variable was found in the second iteration, as shown in Table 2 below, produced with the glm function in R.

Table 2: Logistic regression coefficients (second iteration); columns are Estimate / Std. Error / z value / Pr(>|z|)
- (Intercept): -1.802991 / 0.084297 / -21.389 / < 2e-16
- NumberOfTime30-59DaysPastDueNotWorse: 1.014009 / 0.015124 / 67.047 / < 2e-16
- NumberOfOpenCreditLinesAndLoans: -0.026735 / 0.002843 / -9.405 / < 2e-16
- NumberOfDependents: 0.060666 / 0.011063 / 5.484 / 4.16e-08
- moninc (monthly income): 0.368350 / 0.075812 / 4.859 / 1.18e-06
- age: -0.028723 / 0.001022 / -28.102 / < 2e-16

The variance for each variable is less than or equal to 1, which is appropriate. The accuracy on the test dataset and the area under the ROC curve, shown in Figure 1, are found to be 72.8% and 75.4% respectively, whereas the accuracy on the training dataset is 74%. The Kappa value is 0.1486, which is significant. The gain chart shows that the first decile has 36.5% of good customers, reaching above 50% in the second decile (54.4%); the higher the percentage of observations captured in the first few deciles, the better the predictive power of the model, as shown in Figure 2 below.

Figure 1. Figure 2.

The confusion matrix shows 30036 true negatives (TN), 1871 true positives (TP), 10867 false negatives (FN) and 1049 false positives (FP), as shown in Table 3 below.

Table 3: Confusion matrix for logistic regression
- Actual class 0: 30036 (TN) predicted class 0, 1049 (FP) predicted class 1
- Actual class 1: 10867 (FN) predicted class 0, 1871 (TP) predicted class 1

Here the recall is 14.6%. The AUC with only demographic variables and with only credit bureau variables, considered separately, comes out to 63.69% and 71.6% respectively, indicating that the credit bureau variables explain the dependent variable better than the demographic variables do. The top 5 factors influencing the dependent variable are number of 30-59 days past due not worse, monthly income, number of dependents, age, and number of open credit lines or loans.
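The headline quantities reported for the logistic model (accuracy, recall, AUC) are all computed from the confusion matrix and the score distribution; a small sketch of how each is derived (scikit-learn assumed, toy labels and scores, not this paper's results):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Toy true classes (1 = default) and model scores (illustrative only).
y_true  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.2, 0.25, 0.3, 0.45, 0.5, 0.6, 0.7, 0.8, 0.9])

# Accuracy depends on one specific cut-off (0.5 here) ...
y_pred = (y_score > 0.5).astype(int)
acc = accuracy_score(y_true, y_pred)

# ... recall is TP / (TP + FN) at that same cut-off ...
tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
recall = tp / (tp + fn)

# ... while AUC sweeps every possible cut-off at once.
auc = roc_auc_score(y_true, y_score)
print(acc, recall, auc)
```

For these toy values the accuracy is 0.8 and the recall 0.75, illustrating why a single cut-off can hide how well the minority (default) class is caught.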
Table 4 below shows how the incorporation of variables changes the performance of the model.

Table 4: AUC as variables are added cumulatively
- Number of 30-59 days past due not worse: 68.3%
- + number of open credit lines or loans: 71.1%
- + number of dependents: 71.1%
- + monthly income: 71.8%
- + age: 75.4%

4.2 Random Forest and K-Nearest Neighbour
For random forest, the area under the ROC curve was found to be 80%, as shown in Figure 3, with 93.3% accuracy on the test dataset and 95% accuracy on the training dataset. The confusion matrix in Table 5 below shows 183 true positives (TP), 40726 true negatives (TN), 166 false negatives (FN), and 2748 false positives (FP).

Table 5: Confusion matrix for random forest
- Actual class 0: 40726 (TN) predicted class 0, 2748 (FP) predicted class 1
- Actual class 1: 166 (FN) predicted class 0, 183 (TP) predicted class 1

Here the recall is 52.4%, which is far better than logistic regression. The top 5 important variables in the random forest which influence the dependent variable, ranked in descending order, are revolving utilisation of unsecured lines or loans, debt ratio, age, number of 30-59 days past due not worse, and number of dependents, as shown in Figure 4.

Figure 3. Figure 4.

Variables like revolving utilisation of unsecured lines or loans and debt ratio were found to be the most important variables by the random forest method, even though they are not statistically significant in logistic regression, owing to the latter's limited ability to handle non-linear relationships (it is, after all, a form of generalised linear model). So, a tree-based approach is better for handling variables with a non-linear relationship to the dependent variable [37].
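The K-nearest-neighbour classifier evaluated next assigns a point to the class most common among its K closest neighbours under a chosen metric (Euclidean, as in [28]); a minimal sketch with scikit-learn and toy points:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two small, well-separated groups of points (illustrative only).
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],    # class 0
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

# Euclidean distance is Minkowski with p=2, scikit-learn's default.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean").fit(X, y)

# A new point is assigned to the class of the group it sits closest to.
print(knn.predict([[1.1, 1.0]]))   # -> [0]
print(knn.predict([[3.0, 3.1]]))   # -> [1]
```

As a lazy learner, KNN defers all work to prediction time, which is the efficiency drawback noted in [27]; the choice of K (23 in this paper's experiments) trades noise sensitivity against over-smoothing.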
For K-nearest neighbour, the AUC of the ROC curve was found to be 74%, as shown in Figure 5, with an accuracy of 93.13% and K = 23.

Figure 5.

The confusion matrix is shown in Table 6 below.

Table 6: Confusion matrix for KNN
- Actual class 0: 40862 (TN) predicted class 0, 2849 (FP) predicted class 1
- Actual class 1: 71 (FN) predicted class 0, 41 (TP) predicted class 1

Here the recall is 36.6%, which is better than logistic regression.

4.3 Clustering
With the help of the K-means technique, clustering is done on normalised data using the statistically significant variables. Clustering is done so that the within-cluster sum of squares is as small as possible and the between-cluster sum of squares is as large as possible. Four clusters are made, since up to four clusters there is a substantial fall in the within-cluster sum of squares, as shown in Figure 6; a logistic regression is then applied to each cluster to check whether the performance of the model can be increased. The AUCs of cluster 1, cluster 2, cluster 3 and cluster 4 come out to 74.6%, 73.80%, 72.60% and 75.80%, as shown in Figure 7, with significant Kappa values in each case. This shows that performance does not increase beyond 75% with clustering.

Figure 6. Figure 7.

5 Conclusions and Discussions
We have seen that the performance of the model increased with random forest over logistic regression, logistic regression after clustering, and KNN. The accuracies of random forest, logistic regression and KNN on the test dataset are 93.3%, 72.8% and 93.13% respectively. The AUC for random forest is 80%, while those of logistic regression and KNN are around 75%, showing roughly a 5 percentage point improvement in the performance of random forest over logistic regression and KNN, as shown in Figure 8. This may be because logistic regression assumes a linear relationship (on the logit scale), so it is not well suited to independent variables that have a non-linear relationship with the dependent variable.
For this, a tree-based approach is better; here, random forest is used, which better handles the influence of non-linear relationships of the independent variables on the dependent variable. The area under the curve (AUC) is preferred over accuracy in determining the performance of the model, as the AUC takes all cut-off points into consideration while accuracy is based on a specific cut-off point, so accuracy varies with different cut-off points. Performance did not increase beyond 75% even with the combination of clustering and logistic regression. Comparing random forest, logistic regression and KNN on recall, the recall of random forest is higher than that of the other two techniques, and the recall of KNN is higher than that of logistic regression. Finally, we conclude that random forest is the best-performing technique, followed by KNN and logistic regression. Results could have been better with more data.

Figure 8.

References
[1] Sean Trainer. "Long Twisted History of Credit Score." 22 July 2015, http://time.com/3961676/history-credit-scores/.
[2] D. J. Hand and S. D. Jacka. Statistics in Finance. Wiley, 1998.
[3] Anderson, Raymond. The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation. 2007.
[4] Abdou, Hussein A., and John Pointon. "Credit Scoring, Statistical Techniques and Evaluation Criteria: A Review of the Literature." Intelligent Systems in Accounting, Finance and Management, vol. 18, no. 2-3, Apr. 2011, pp. 59-88. Wiley Online Library, doi:10.1002/isaf.325.
[5] Thomas, L., et al. Credit Scoring and Its Applications. Society for Industrial and Applied Mathematics, 2002. epubs.siam.org (Atypon), doi:10.1137/1.9780898718317.
[6] Bailey, Murray. Consumer Credit Quality: Underwriting, Scoring, Fraud Prevention and Collections. White Box Publications, 2004.
[7] Kossmann, R., and D. Caire. "Credit Scoring: Is It Right for Your Bank?" Paper, Feb. 2003, p. 12.
The Microfinance Gateway, http://www.microfinancegateway.org/library/credit-scoring-it-right-your-bank.
[11] Hasan, Kazi. "Development of a Credit Scoring Model for Retail Loan Granting Financial Institutions from Frontier Markets." SSRN Scholarly Paper, ID 2821626, Social Science Research Network, 10 Aug. 2016. papers.ssrn.com, https://papers.ssrn.com/abstract=2821626.
[12] Hand, D. J. "Good Practice in Retail Credit Scorecard Assessment." The Journal of the Operational Research Society, vol. 56, no. 9, 2005, pp. 1109-17.
[13] Kocenda, Evzen, and Martin Vojtek. "Default Predictors and Credit Scoring Models for Retail Banking." SSRN Scholarly Paper, ID 1519792, Social Science Research Network, Dec. 2009. papers.ssrn.com, https://papers.ssrn.com/abstract=1519792.
[14] Abdou, Hussein A. "Genetic Programming for Credit Scoring: The Case of Egyptian Public Sector Banks." Expert Systems with Applications, vol. 36, no. 9, Nov. 2009, pp. 11402-17. ScienceDirect, doi:10.1016/j.eswa.2009.01.076; and Abdou, Hussein A. "Predicting Creditworthiness in Retail Banking with Limited Scoring Data." Knowledge-Based Systems, vol. 103, July 2016, pp. 89-103. ScienceDirect, doi:10.1016/j.knosys.2016.03.023.
[15] Orgler, Yair E. "A Credit Scoring Model for Commercial Loans." Journal of Money, Credit and Banking, vol. 2, no. 4, 1970, pp. 435-45. JSTOR, doi:10.2307/1991095.
[16] Orgler, Yair E. "A Credit Scoring Model for Commercial Loans." Journal of Money, Credit and Banking, vol. 2, no. 4, 1970, pp. 435-45. JSTOR, doi:10.2307/1991095; and Orgler, Yair E. "Evaluation of Bank Consumer Loans with Credit Scoring Models." Tel-Aviv University, Department of Environmental Sciences, 1971.
[17] Regression Analysis, 2nd Edition. https://www.elsevier.com/books/regression-analysis/freund/978-0-12-088597-8. Accessed 1 Feb. 2018.
[18] Hand, D. J., and W. E. Henley. "Statistical Classification Methods in Consumer Credit Scoring: A Review." Journal of the Royal Statistical Society, Series A (Statistics in Society), vol. 160, no. 3, 1997, pp. 523-41.
[19] Hand, D. J., and F. Zhou.
"Evaluating Models for Classifying Customers in Retail Banking Collections." The Journal of the Operational Research Society, vol. 61, no. 10, 2010, pp. 1540-47.
[20] Bekhet, Hussain Ali, and Shorouq Fathi Kamel Eletter. "Credit Risk Assessment Model for Jordanian Commercial Banks: Neural Scoring Approach." Review of Development Finance, vol. 4, no. 1, Jan. 2014, pp. 20-28. ScienceDirect, doi:10.1016/j.rdf.2014.03.002.
[21] Scitovski, Sanja, and Nataša Šarlija. "Cluster Analysis in Retail Segmentation for Credit Scoring." Croatian Operational Research Review, vol. 5, no. 2, Jan. 2015, pp. 235-45.
[22] Bakoben, Maha, et al. "Identification of Credit Risk Based on Cluster Analysis of Account Behaviours." arXiv:1706.07466 [q-fin, stat], May 2017. arXiv.org, http://arxiv.org/abs/1706.07466.
[23] Breiman, Leo. "Random Forests." Machine Learning, vol. 45, no. 1, Oct. 2001, pp. 5-32. link.springer.com, doi:10.1023/A:1010933404324.
[24] Sharma, Dhruv. "Improving the Art, Craft and Science of Economic Credit Risk Scorecards Using Random Forests: Why Credit Scorers and Economists Should Use Random Forests." SSRN Scholarly Paper, ID 1861535, Social Science Research Network, 9 June 2011. papers.ssrn.com, https://papers.ssrn.com/abstract=1861535.
[25] Sullivan, Hue, et al. "Machine Learning for Credit Scoring: Improving Logistic Regression with Non-Linear Decision Tree Effects." July 2017.
[26] Zhang, Zhongheng. "Introduction to Machine Learning: K-Nearest Neighbors." Annals of Translational Medicine, vol. 4, no. 11, June 2016. PubMed Central, doi:10.21037/atm.2016.03.37.
[27] Guo, Gongde, et al. "KNN Model-Based Approach in Classification." On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE, Springer, Berlin, Heidelberg, 2003, pp. 986-96. link.springer.com, doi:10.1007/978-3-540-39964-3_62.
[28] Henley, William, and David Hand. "Construction of a k-Nearest-Neighbour Credit-Scoring System." IMA Journal of Management Mathematics, vol. 8, Apr. 1997. ResearchGate, doi:10.1093/imaman/8.4.305.
[29] Hossin, Mohammad, and Sulaiman M. N.
A Review on Evaluation Metrics for Data Classification Evaluations. International Journal of Data Mining Knowledge Management Process, vol. 5, Mar. 2015, pp. 01-11. ResearchGate, doi10.5121/ijdkp.2015.5201. 30 Zhang, Yifeng, and Siddhartha Bhattacharyya. Genetic Programming in Classifying Large-Scale Data An Ensemble Method. Information Sciences, vol. 163, no. 1, June 2004, pp. 85101. ScienceDirect, doi10.1016/j.ins.2003.03.028. 33 Metz, Charles E. Basic Principles of ROC Analysis. Seminars in Nuclear Medicine, vol. 8, no. 4, Oct. 1978, pp. 28398. ScienceDirect, doi10.1016/S0001-2998(78)80014-2. 34 X. Ling, Charles, et al. AUC A Better Measure than Accuracy in Comparing Learning Algorithms. Canadian Conference on AI, 2003, pp. 32941. ResearchGate, doi10.1007/3-540-44886-1_25. 35 Brandenburger, Thomas, and Alfred Furth. Cumulative Gains Model Quality Metric. Advances in Decision Sciences, 2009, doi10.1155/2009/868215. 36.SAGE Encyclopedia of Educational Research, Measurement, and Evaluation, SAGE Publications, Inc., 2018. Crossref, doi10.4135/9781506326139.n747. 37 Sachan, Lalit. Logistic Regression vs Decision Trees vs SVMPartII.EdvancerEduventures.https//www.edvancer.in/logistic-regression-vs-decision-trees-vs-svm-part2/. Accessed 14 July 2018. An asset that the lender take from a borrower to secure a loan. (Investopedia.com) Type 1 error is called as false positive i.e. class which are false is considered as true (Investopedia.com) Type 2 error is called as false negative i.e. class which is false is considered as true (Investopedia.com) When one variable is in relation with another variable (Investopedia.com) moninc monthly income s)I(5EoaKV0kQ)y4XC3tHGpEO7g(RLJWt2NbSy)tPJEnMZ,YJ6o6H ia-I)qOd0ASifVmtdOs,_7z 01JE
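The Type 1 / Type 2 error distinction in the footnotes can be illustrated with a minimal sketch (not from the paper; the labels and predictions below are hypothetical). For a default classifier, a false positive flags a borrower who would have repaid, while a false negative misses an actual defaulter:

```python
# Hypothetical labels for eight borrowers: 1 = defaulted, 0 = repaid.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]  # illustrative model output

# Type 1 error (false positive): predicted default, borrower actually repaid.
type_1 = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

# Type 2 error (false negative): predicted repayment, borrower defaulted.
type_2 = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(type_1, type_2)  # -> 1 1
```

In credit scoring the two errors carry different costs: a Type 2 error (lending to a defaulter) typically costs the bank far more than a Type 1 error (turning away a good borrower), which is why the evaluation metrics cited above (ROC/AUC, cumulative gains) matter more than raw accuracy.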