Hello. 🙂 I am back with a post about forecasting, a subject I have always carefully tried to avoid … for personal reasons. 😀
Forecasting is the science (or, for some, the art 🙂) of analysing trends and estimating the probability of future outcomes from historical data.
If a trend is thought to be linear, we can model it with linear regression. If the trend is not linear, other tests are preferred, such as Kendall’s rank correlation, and/or smoothing techniques. Descriptive statistics are an important exploratory tool for understanding what kind of distributions we are dealing with.
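To make the idea concrete, here is a minimal, plain-Python sketch of Kendall’s rank correlation (ties ignored). The price series is made up for illustration – it is not the Apple data used later in this post.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Plain-Python Kendall's tau (no tie handling) - illustrative only."""
    concordant = discordant = 0
    # Compare every pair of observations: same ordering in x and y
    # is concordant, opposite ordering is discordant.
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

days = list(range(7))                                   # hypothetical trading days
prices = [110.2, 111.5, 111.1, 113.8, 115.0, 114.6, 117.3]  # made-up prices
print(kendall_tau(days, prices))  # a value near 1 => strong upward monotonic trend
```

A tau close to +1 (or −1) signals a strong monotonic trend even when the relationship is not linear, which is exactly why rank-based tests are preferred in that case.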
Data about Apple Inc. share prices was collected from Yahoo Finance – more precisely: date, open, high, low, close, adjusted close, and the volume of shares traded on the market daily, from 01.01.2017 to the last trading day available before this post, 09.06.2017.
Once the data is collected, cleaned (if necessary), normalised (if necessary), scaled (if necessary), and so on, we are ready to perform descriptive statistical analysis, which will help us investigate further the issues of more critical interest.
**Variable #2 (Apple Inc. Share Price – Open)**

| Statistic | Value | Statistic | Value |
|---|---|---|---|
| Mean LCL | 136.0978 | Third Moment | -771.31834 |
| Mean UCL | 140.62558 | Fourth Moment | 46,172.3278 |
| Mean Standard Error | 1.14224 | Sum Standard Error | 125.64685 |
| Coefficient of Variation | 0.08658 | Total Sum Squares | 2,121,478.93681 |
| | | Adjusted Sum Squares | 15,643.61058 |
| Percentile 25% (Q1) | 132.06925 | Skewness Standard Error | 0.22834 |
| Percentile 75% (Q3) | 144.869 | Kurtosis | 2.28293 |
| IQR | 12.79975 | Kurtosis Standard Error | 0.44452 |
| MAD (Median Absolute Deviation) | 0.702 | Skewness (Fisher’s) | -0.46111 |
| Coefficient of Dispersion (COD) | 0.06588 | Kurtosis (Fisher’s) | -0.69417 |
Here the software has automatically binned the opening prices into ranges (“$110 to $115”, “$115 to $120”, …). When the statistical package bins for you, you may easily get quasi-normal distributions. If you are familiar with programming, you can define the number of ranges your samples should fall into yourself. However, one should be careful when defining ranges, because scientists and statisticians may often be biased by what they want to find. Setting the number of ranges yourself can therefore be dangerous, unless a careful methodological study has been run on the sampling and methodologically sound reasons exist for sampling and binning that way.
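As an illustration, here is how one could define the bin ranges explicitly with NumPy’s `histogram` function. The opening prices below are hypothetical, just to show the mechanics of choosing $5-wide bins like the ones above.

```python
import numpy as np

# Hypothetical opening prices (not the real Apple series).
opens = np.array([112.3, 114.1, 116.8, 119.5, 121.0, 124.7, 128.2,
                  131.9, 135.4, 138.8, 141.2, 144.6, 146.1, 149.9])

# Explicit $5-wide bin edges: 110, 115, 120, ..., 150
edges = np.arange(110, 155, 5)
counts, _ = np.histogram(opens, bins=edges)

for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"${lo} to ${hi}: {c} day(s)")
```

The point of the warning above is exactly this `edges` array: pick it before looking at the data (or justify it methodologically), not after, otherwise you can nudge the histogram toward the shape you were hoping to see.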
Descriptive statistics and plots are produced for every variable of interest.
Being able to interpret the statistics is a huge plus at every stage. Unfortunately I cannot dig into the statistics now, as I still need to get to the gist of this post, which is forecasting and predictive models.
One should however appreciate that whilst many think of data visualisation as an achievement, data science professionals use visualisation as an exploratory tool for their analysis.
For example, in the following graph we immediately see a fat tail event happening at point 21 of our list of observations: it seems that 2017-02-01 was a very good day for Apple Inc. as they traded a huge volume of shares, 111,985,000 (in one day!).
If you consider that the opening price of one share was $125.96 on that day, I let you do the math to figure out how much money changes hands in Apple Inc. shares in one day :O) [Note though that our math will be biased, because prices fluctuate constantly. To get smaller errors we would need to collect data at smaller time-frames (e.g. every 5 minutes).]
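If you want to skip the mental arithmetic, here is the back-of-the-envelope computation – using the day’s opening price as a (biased) proxy for every trade, as noted above.

```python
# Notional value traded in Apple Inc. shares on 2017-02-01,
# approximating every trade at the day's opening price.
volume = 111_985_000        # shares traded that day
open_price = 125.96         # opening price in USD

notional = volume * open_price
print(f"~${notional:,.0f}")  # roughly $14.1 billion
```

Around fourteen billion dollars of stock changing hands in a single session – and that is the rough, opening-price-only estimate.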
Being capable of reading a graph can sometimes be a life-saviour or it can be a time-saviour (which indeed is a life-saviour).
When building a predictive model on a list of observations, a statistical package will return the residuals: the differences between the predicted values and the actual values. When I built the linear regression model fitted to the Apple Inc. data collected, the system generated the following plots for the residuals.
In the figure in the lower left corner (titled Residuals), at the end of the x-axis, observation 110 dips sharply. This is not a human error (in handling the data), nor is it a statistical error in the package (although in some cases both may happen).
In fact, here the linear model output a prediction that is significantly lower than the real value. The residual is the difference between the estimated (predicted) value and the real value. As the following table shows, the biggest residual is at observation 110.
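Here is a minimal sketch of how residuals are computed: fit a least-squares line, then compare fitted values with observed ones. The numbers are toy data, not the Apple series, and the sign convention follows this post (predicted minus actual – note that many packages report actual minus predicted instead).

```python
import numpy as np

# Toy observations (not the Apple Inc. data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Ordinary least-squares fit of a straight line.
slope, intercept = np.polyfit(x, y, deg=1)
predicted = intercept + slope * x

# Predicted minus actual, matching the convention used in this post.
residuals = predicted - y
print(residuals)
print("largest residual at observation", int(np.argmax(np.abs(residuals))))
```

With an intercept in the model, the residuals always sum to zero; what matters diagnostically is which single observation carries an unusually large one – the analogue of observation 110 above.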
*(Residuals table – for each observation the package reports: Real Value, Predicted Y, Residual, Standardized [Excel], Studentized, Deleted t, Leverage, Cook’s D, DFIT and PRESS. Observation 110, with a real value of 116,150,002, shows the largest residual.)*
A Linear Model
The null hypothesis is that there is no relation between high and low prices and the volume of shares traded in a day (a naive assumption, but it is just for the sake of testing the model).
Let us run the model to see how it fits the collected data.
As you can see in the following table, R, R-squared and Adjusted R-squared indicate that the linear regression model explains up to 99% of the variance in the data, which, as you might understand, is quite good!!! (especially because the model is built on as few as 110 observations). Imagine building it on years of data, correlated at microscopic granularity 😀 thrilling!
As the ANOVA box shows, the p-value is reported as 0 – that is, it is smaller than the package’s display precision. Since it falls well below the significance threshold, the null hypothesis is rejected: high and low prices do have a significant relationship with the volume of shares traded. We know that from economic theory too.
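As a sketch of how R-squared is obtained for a two-predictor linear model like this one, here is a small NumPy example. The data is synthetic and deliberately constructed so that the fit is nearly perfect, mimicking the ~99% figure above – the coefficients and noise level are invented, not estimated from the Apple data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 110 days of (high, low) prices and volume.
high = rng.uniform(130, 150, size=110)
low = high - rng.uniform(1, 4, size=110)
volume = 5e6 + 2e5 * high - 1.5e5 * low + rng.normal(0, 1e4, size=110)

# Design matrix: intercept column plus the two predictors.
X = np.column_stack([np.ones_like(high), high, low])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)

fitted = X @ beta
ss_res = np.sum((volume - fitted) ** 2)          # residual sum of squares
ss_tot = np.sum((volume - volume.mean()) ** 2)   # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"R-squared = {r_squared:.4f}")
```

Because the noise term is tiny relative to the signal, R-squared comes out very close to 1 – which is the mechanism behind a “99% of the variance explained” headline number.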
116,150,002 = 7.47465 + 0.9517 × 114,826,157
*(Coefficients table – columns: Coefficient, Standard Error, LCL, UCL, t Stat, p-value, H0 rejected at 5%. LCL and UCL are the lower and upper limits of the 95% confidence interval.)*
As we saw, the linear regression model is able to explain 99% of the variance, meaning that, statistically, it is a good one.
Whilst statistically fit, this linear model is financially inefficient. As we have seen, a residual of –$6 per share in one day could be very bad – depending on whether you are long or short on that asset – and it could be terrible if you own a lot of shares.
A predictive model for financial price fluctuation should consider (applying) the following:
- Big Data Analysis
- Real-Time Data Analysis (or analysis at the shortest possible timeframe)
- Asset Correlation Analysis
Such a model should consider that financial assets in a portfolio are correlated, so losses on one asset do not necessarily imply a very negative outlook for overall portfolio earnings.
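To illustrate, a correlation matrix of daily returns shows at a glance which assets move together and which move in opposite directions – the returns below are hypothetical numbers for three imaginary assets.

```python
import numpy as np

# Hypothetical daily returns for three assets (rows = days, columns = assets).
returns = np.array([
    [ 0.010, -0.004,  0.006],
    [-0.008,  0.003, -0.005],
    [ 0.012, -0.006,  0.009],
    [-0.003,  0.002, -0.001],
    [ 0.007, -0.002,  0.004],
])

# Pairwise correlation between the asset columns.
corr = np.corrcoef(returns, rowvar=False)
print(np.round(corr, 2))
```

Here assets 1 and 2 move against each other while assets 1 and 3 move together – so a loss on asset 1 tends to be partly offset by asset 2, which is exactly why correlation analysis belongs in a financial predictive model.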
Thank you for reading. More will come. 🙂