This conversation brings together statistical scientists and scholars who study film. Two things bring us together. First, we are driven by mutual curiosity about cinemetrics as a field: what can numbers tell us about films, and how do films fit in with what we know about numbers? Second, we hope to find out something about Cinemetrics as a site: what variables should Cinemetrics make available to its users, and which statistical tools need to be added to Cinemetrics labs? We plan to tackle these questions in a series of notes posted here from now through spring 2013.
Let me start off by introducing the team. My name is Yuri Tsivian; I study film, teach it at the University of Chicago and, in tandem with computer scientist Gunars Civjans, run the site that hosts this conversation. Beside me are two film scholars: Barry Salt of the London Film School, who pioneered the discipline of film statistics in 1974 and whose personal database and multiple essays are found elsewhere on this website, and Nick Redfern, whose own website features over 50 cinemetrics studies and reflections. On the other side are two academic statisticians: Mike Baxter of Nottingham Trent University, who has been publishing in statistical archaeology and quantitative geography since the late 1970s and whose more recent interest in film statistics has resulted in three essays on the subject, and Vanja Dukic of the University of Colorado at Boulder, who happened to be around when Cinemetrics was born in 2005 and to whose expertise this site owes its first statistical steps.
The way I would like this conversation to evolve is round by round. To give it a sense (or semblance) of direction, I will start each round by posing a question about this or that aspect of statistical film studies which our four experts might use as a starting point. Here is an approximate plot, which is quite likely to change as new questions arise in the course of the conversation. My first question (of which more later) is about the role of ASLs, medians and outliers. This subject may well lead us to questions about lognormality tests, which will ring in the second round. We may go on from there to the third question, which relates to whether parametric or nonparametric statistics works better for films. The fourth question might be about autocorrelation or other possible methods to establish cases in which shots tend to cluster, and whether there is periodicity to this. We may then want to discuss the uses of descriptive, inferential and experimental statistics in film studies; I would also be interested in learning more about the best ways to establish possible correlations between different variables of film style. We might then go on to the question of how to visualize data, for instance, whether good old bar plots work well enough to represent the shot scale profile of a motion picture. Again, all this is just a scheme which we may either flesh out or send the way of all flesh.
A good place to start is by taking stock of statistical evidence in use. Is the average shot length (ASL, the first variable film scholars normally look at) the best way of contrasting and comparing cutting rates across films?
The question has two sides to it which I want to outline briefly before I hand it over to you four. On the one hand, ASL has worked for generations of film scholars dating back to one day in 1916 when Harvard psychologist Hugo Muensterberg walked into a movie theater, looked at his pocket watch, counted some shots (then called "scenes"), calculated the mean and came up with the following diagnosis:
If the scene changes too often and no movement is carried on without a break, the [photo]play may irritate us by its nervous jerking from place to place. Near the end of the Theda Bara edition of Carmen [1915] the scene changed one hundred and seventy times in ten minutes, an average of a little more than three seconds for each scene. We follow Don José and Carmen and the toreador in ever new phases of the dramatic action and are constantly carried back to Don José's home village where his mother waits for him. There indeed the dramatic tension has an element of nervousness, in contrast to the Geraldine Farrar version of Carmen [1915] which allows a more unbroken development of the single action.
(Hugo Muensterberg, The Photoplay: A Psychological Study (New York, London: D. Appleton and Company, 1916), p. 456.)
On the other hand, a number of modern-day statisticians tend to question the effectiveness of the arithmetic mean on the grounds that it is too sensitive to outliers. Their position is neatly summarized in Nick Redfern's latest study "Statistics and the Analysis of Film Style" (though the study is as yet unpublished, I have Nick's kind permission to use parts of it in this conversation):
The most commonly cited statistic is the 'average shot length' (ASL). The ASL is typically described as 'the length of the film in seconds (feet) divided by the number of shots in it.' Clearly, in this context the 'average' referred to is the mean. Unfortunately, a worse choice of a statistic to describe film style could not have been made: the mean is not an appropriate measure of central tendency for a skewed data set with a number of outliers, and these are precisely the characteristics of the distribution of shot lengths. The mean shot length is used to compare how quickly two films are edited, or for comparing the cutting rate of groups of films. However, the fact the mean is not robust to deviations from normality means that these conclusions are not valid and the conclusion of the researcher will clearly be flawed.
Resting upon the opinion of statisticians whose methods have been shaped by working with non-film-related data, what Redfern suggests is to refocus film studies from mean to median values:
The median shot length provides a simple robust alternative to the mean because it will locate the centre of any distribution irrespective of its shape and is not affected by the presence of outliers in the data. The mean is affected by outliers, being pulled away from the mass of the data in the direction of the outliers; whereas the median is not affected by the presence of outliers and is, therefore, resistant to their influence. Outliers occur in shot length distributions as shots that are exceptionally long relative to the rest of the shots in a film; and it is for this reason that researchers outside film studies used the median shot length to describe film style and not the mean shot length.
Even though Cinemetrics as a website displays both ASL and MSL (median shot length) for each submission, the question Nick Redfern raises is relevant for cinemetrics as a field. Even if some may not find there are good reasons for film studies to wipe the slate clean and start everything from scratch, it makes sense perhaps to use Nick's warning as an excuse to revisit our statistical toolkit and get a better sense of the nature of film-related data. What are outliers in our particular case? What price do we pay for ignoring outliers, and what price for heeding them too much?
Recently, Barry Salt gave some thought to this in his 2011 study "The Metrics in Cinemetrics". Let me quote what Barry says about unexpectedly long shots in films like Ride Lonesome (1959):
These long take shots could reasonably be referred to as "outliers" in this particular case, but to disregard their existence in an investigation is to shut your eyes to the very thing that makes this film special.
And about ASL in general:
In any case, the mean exists as a basic characteristic of any distribution, and the ASL has been adopted by many other people as a standard measure for film statistics since I introduced it 35 years ago. This is partly because it is easy to get. You just have to know how many shots there are in a film, and the film's length, to work it out. That is how I come to have a database of nearly 10,000 ASLs from complete films, which is very useful for stylistic comparisons. I consciously chose to call it the Average Shot Length, rather than the Mean Shot Length, because I reckoned that a smaller number of the many rather innumerate people in film studies would be put off by the former name. You can only get the median by listing all the shot lengths in a film, as in Cinemetrics. It is worth remarking that if you only consider the median shot length, you can be seriously misled about the distribution of shot lengths in the film you are considering. For instance, both The Lights of New York (1928) and The New World (2005) have median shot lengths of 5.1 seconds, so on this ground alone you might think they have similar distributions, but when you look at their other features it turns out they are very different.
So, the median shot length or the average shot length, or maybe both? And is there a line beyond which film studies should not ignore the presence of outliers, unless of course these have been caused by measurement errors? If my summary sounds accurate enough, let me submit these questions for your consideration.
A shot length distribution is a description of the data set created for a film by recording the length of each shot in seconds. Since a film typically comprises several hundred (if not thousands) of shots we need summary statistics that accurately reflect the style of a film in order to simplify our analysis and to communicate our results. The average shot length (ASL) is the most commonly cited statistic of film style, and is used to describe how quickly a film is edited with a low ASL representing a fast editing style and a high ASL indicating a slow cutting rate. By comparing the ASLs of two films we can determine if they have similar styles or if one film is edited more quickly than the other.
The ASL commonly referred to in film studies is the arithmetic mean and is equal to the sum of the data values (i.e. the total running time) divided by the number of shots. The mean is the point at which a data set is balanced, and as a 'centre of gravity' is a representative statistic of central tendency when the distribution of the data is symmetrical. However, the distribution of shot lengths in a motion picture is characterised by its lack of symmetry so that the majority of shot lengths are less than the mean due to the influence of a number of shots that are of exceptionally long duration relative to the rest of the shots in a film [1]. In statistical terms, the mean shot length is not a robust statistic of film style because it does not provide a stable description of a data set when underlying assumptions (e.g. a symmetrical normal distribution) are not met [2]. It has the worst possible breakdown point of 1/n so that just a single outlying data point can lead to the mean becoming an arbitrarily bad estimate of the centre of a data set (Wilcox 2011: 19-21) [3]. Consequently, the mean shot length does not give an accurate or reliable description of a film's style and use of this statistic to compare shot length distributions inevitably leads to flawed inferences.
This lack of robustness in the presence of outliers has led several researchers, typically from outside film studies, to prefer the median shot length as a statistic of film style in place of the mean. Adams, Dorai, and Venkatesh (2002: 72) preferred the median as a measure of location because it 'provides a better estimate of the average shot length in the presence of outliers.' Similarly, Vasconcelos and Lippman (2000: 17) reject the use of the mean because it is 'well known in the statistics literature [...] that the sample mean is very sensitive to the presence of outliers in the data,' and that '[m]ore robust estimates can be achieved by replacing the sample mean by the sample median.' Kang (2003: 245) used the median shot length 'because it shows a better estimate than the average [mean] shot length in the presence of outliers' when analysing the relationship between emotion and film style. Finally, in television studies Schaefer and Martinez (2009) used the median shot length in order to study changing editing patterns in news bulletins because it provided better indicators of shot length than the means and because the means are inordinately influenced by a few outlier values from the longest shots.
The median is the middle value when shot length data is ranked by order of magnitude, so that for any film 50 per cent of shots will be less than or equal to the median and 50 per cent will be greater than or equal to it. If the data set contains an odd number of observations the median is the centre value of the order statistics; if it contains an even number of values the median is equal to the mean of the two middle values. Because the median is based on the ranked data rather than the data values themselves, it locates the centre of a distribution irrespective of its shape. It has the highest possible breakdown point of 0.5, which means that half the data can take on extreme values before the median is heavily influenced (Wilcox 2011: 22-24). Consequently, the median shot length is a robust statistic of film style, resistant to the influence of outlying data points, and accurately describes the style of a film without requiring assumptions about the underlying probability distribution of the data.
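The contrast between the two statistics is easy to demonstrate in a few lines of Python. This is a minimal sketch with invented shot lengths, not data from any actual film:

```python
# A minimal sketch of the mean's fragility: one long take (an outlier)
# drags the mean away from the mass of the data, while the median
# barely moves. Shot lengths (in seconds) are hypothetical.
from statistics import mean, median

shots = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 7.0, 9.0]
print(mean(shots))    # about 4.9 seconds
print(median(shots))  # 4.5 seconds

shots_with_outlier = shots + [95.0]   # add a single 95-second take
print(mean(shots_with_outlier))       # jumps to 13.9 seconds
print(median(shots_with_outlier))     # moves only to 4.75 seconds
```

With ten shots, the mean's breakdown point of 1/n means a single extreme value is enough to nearly triple it, while the median's breakdown point of 0.5 keeps it close to the centre of the data.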
The duration of a shot in a motion picture carries information about the stylistic decisions of filmmakers and it is desirable that we retain all this information in our data set. It is often the unusual deployment of style that is of interest to the film analyst and so including this data is important for the analysis of film style. For this reason the removal of outlying data points or the trimming or winsorising of the data is undesirable. At the same time, it is necessary to ensure these unusual events do not distort our understanding of a film's style so that our description of shot length data and the conclusions we draw accurately reflect the style of a film and not the influence of a small proportion of atypically long takes.
To illustrate the difference between using the different ASLs we compare the mean and the median shot lengths of two early Hollywood sound films: Lights of New York (Bryan Foy, 1928) and Scarlet Empress (Josef von Sternberg, 1934), using data from the Cinemetrics database (O'Brien 2007, Salt 2007). Table 1 presents the descriptive statistics for these films.
Table 1 Descriptive statistics for Lights of New York (1928) and Scarlet Empress (1934)

                            Lights of New York    Scarlet Empress
Length (s)                  3329.7                5948.4
Shots                       338                   601
Mean Shot Length (s)        9.9                   9.9
Standard Deviation (s)      14.5                  9.6
Skew                        3.7                   2.3
Minimum Shot Length (s)     0.9                   0.3
Lower Quartile (s)          3.0                   3.6
Median Shot Length (s)      5.1                   6.5
Upper Quartile (s)          10.2                  12.9
Maximum Shot Length (s)     95.6                  64.2
Looking at the average shot lengths, we see that both films have a mean shot length of 9.9 seconds, whereas the median shot length for Scarlet Empress is 1.4 seconds greater than that of Lights of New York. Therefore we can conclude either that

the two films are cut at the same pace, since they have identical mean shot lengths,

or that

Lights of New York is cut more quickly than Scarlet Empress, since it has the lower median shot length.

These statements are contradictory and cannot both be true, since they purport to describe the same thing.
From the descriptive statistics in Table 1 we know that the distribution of shot lengths in these films is asymmetrical: both films exhibit positive skewness with a long right tail. The distance between the median and the maximum shot length is much greater than the distance between the minimum shot length and the median; while the distance from the upper quartile (Q3) to the maximum is much greater than the distance between the minimum and the lower quartile (Q1). The maximum shot lengths are much greater than both the median and the mean. It is in precisely these circumstances that the mean provides a misleading description of a data set, and being aware of this should lead us to prefer the median to the mean and, therefore, to conclude that Lights of New York is cut more quickly than Scarlet Empress.
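The tail-distance argument can be checked directly against the Table 1 figures; the sketch below simply re-computes those distances:

```python
# Checking the asymmetry argument with the Table 1 values: in both films
# the upper tail (maximum minus upper quartile) is far longer than the
# lower tail (lower quartile minus minimum).
lights = {"min": 0.9, "q1": 3.0, "median": 5.1, "q3": 10.2, "max": 95.6}
scarlet = {"min": 0.3, "q1": 3.6, "median": 6.5, "q3": 12.9, "max": 64.2}

for name, f in (("Lights of New York", lights), ("Scarlet Empress", scarlet)):
    upper_tail = f["max"] - f["q3"]   # 85.4 and 51.3 seconds
    lower_tail = f["q1"] - f["min"]   # 2.1 and 3.3 seconds
    print(name, upper_tail, lower_tail)
```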
This can be more easily appreciated if we compare the shot length distributions of these films graphically. Numerical descriptions are valuable but it is often simpler and more informative to use graphical representations of shot length data to aid us in analysing film style. Box plots are an excellent method for conveying a large amount of information about a data set quickly and clearly. The box plot provides a graphical representation of the five-number summary, which includes the minimum value, the lower quartile, the median, the upper quartile, and the maximum value of a data set. The core of the data is defined by the box, which covers the interquartile range (IQR) and is equal to the distance between the lower and upper quartiles, and the horizontal line within the box represents the median value of the data. The inner fences are marked by error bars extending from the box, and data points beyond these limits are classed as outliers. An outlier is defined as greater than Q3 + (IQR × 1.5), and the error bars extend to the largest value within this limit; while an extreme outlier has a value greater than Q3 + (IQR × 3). Typically, there are no outliers at the low end of a shot length distribution, and the error bar descends to the value of the shortest shot in a film.
Based on the criteria stated above, Lights of New York has 33 outliers, with 22 classed as 'extreme,' covering a range of 21.2 seconds to 95.6 seconds; and Scarlet Empress has 39 outliers, including nine classed as 'extreme,' covering a range of 27.3 seconds to 64.2 seconds. These outliers comprise only a small proportion of the shots in each film: 10 per cent in the case of Lights of New York and 6 per cent in Scarlet Empress. Lights of New York has a small number of shots that exceed the maximum shot length of Scarlet Empress: specifically, there are six shots in excess of 70 seconds and the longest shot is more than 30 seconds longer than the maximum shot of von Sternberg's film. Lights of New York therefore not only has more outliers but they tend to lie relatively further away from the mass of the data. The influence of outliers on the mean is obvious, and from Figure 1 we also see that, rather than locating the centre of the distribution for these films, the mean is actually greater than the majority of shot lengths. In Lights of New York the mean shot length is greater than or equal to 74 per cent of shots, and in Scarlet Empress the mean is greater than or equal to 66 per cent of the shots. It is also immediately apparent from Figure 1 that the median shot length of Lights of New York lies to the left of the median of Scarlet Empress.
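The outlier rule described above (Tukey's fences) is straightforward to compute. Here is a sketch using Python's standard library on invented shot lengths, not the actual records for either film:

```python
# Tukey's fences as described above: an outlier exceeds Q3 + 1.5 * IQR,
# an 'extreme' outlier exceeds Q3 + 3 * IQR. The data are illustrative.
from statistics import quantiles

def tukey_outliers(lengths, factor=1.5):
    """Return the shot lengths beyond the upper fence Q3 + factor * IQR."""
    q1, _, q3 = quantiles(lengths, n=4, method="inclusive")
    fence = q3 + factor * (q3 - q1)
    return [x for x in lengths if x > fence]

lengths = [1, 2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 10, 12, 20, 95]
print(tukey_outliers(lengths))            # [20, 95]  - ordinary outliers
print(tukey_outliers(lengths, factor=3))  # [95]      - extreme outliers
```

Note that there are several conventions for computing quartiles; the `method="inclusive"` choice here is one of them, so fence values may differ slightly from other software.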
Once we understand the nature of the data sets we are dealing with and how the statistics behave in these circumstances we can come to a conclusion regarding how the styles of these films differ. Clearly, using the mean shot length to compare the editing style of Lights of New York and Scarlet Empress leads the researcher to draw the wrong conclusion that both films have similar editing style. This is because of the influence of a small number of takes of long duration on the mean shot length making it an unrepresentative statistic of film style. The fivenumber summary, the interquartile range, and the box plot provide accurate and reliable descriptions of film style leading us to the correct conclusion that the duration of takes in Scarlet Empress tends to be longer than those in Lights of New York.
The mean shot length has been widely used as a statistic of film style even though it is obviously inappropriate for that purpose. The mean shot length does not locate the centre of a shot length distribution and is not resistant to the influence of outliers. Unfortunately, this means film scholars have been laboring under a series of misconceptions about the nature of film style due to the use of this statistic. Specifically, use of the mean shot length leads to the researcher (i) identifying differences in film style when they do not in fact exist (Type I error), (ii) failing to identify changes in film style when they do occur (Type II error), and (iii) incorrectly estimating the size of any change in style correctly identified.
These problems may be overcome by using the median shot length, which is resistant to the influence of outlying data points whilst retaining the complete set of shot lengths for a film. The median has a clearly defined meaning that is easy to understand, and fulfils precisely the role film scholars erroneously believe the mean performs.

Finally, it should be noted that the style of a motion picture cannot be adequately described using the average shot length alone: it is also necessary to use statistics that describe the variation in shot lengths. Again, it is necessary to attend to the nature of the data and the nature of the statistics so that we do not employ statistics that give a misleading description of film style. In fact, the case of Lights of New York and Scarlet Empress is a perfect demonstration of the necessity of robust measures of scale. The standard deviation of the shot lengths in Lights of New York is greater than that of Scarlet Empress (14.5 seconds compared to 9.6 seconds), indicating that there is greater variation in the former than the latter. However, the interquartile ranges lead us to the opposite conclusion: the IQR of Scarlet Empress is 9.3 seconds while the IQR of Lights of New York is 7.2 seconds, indicating that the latter film exhibits less variation in its shot lengths. Like the mean, the standard deviation is not robust and has a breakdown point of 1/n. The IQR is resistant to the influence of outliers since it is based on the middle 50 per cent of the data and has a breakdown point of 0.25. Use of the IQR leads us to the correct conclusion regarding the differences in the styles of these films: that Lights of New York exhibits less variability in its shot lengths than Scarlet Empress.
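The standard-deviation-versus-IQR contrast can be reproduced in miniature. The data below are invented, but the behaviour mirrors the Table 1 case: one extreme long take inflates the standard deviation many times over while leaving the IQR almost untouched.

```python
# The standard deviation (breakdown point 1/n) versus the interquartile
# range (breakdown point 0.25) on illustrative shot-length data.
from statistics import stdev, quantiles

def iqr(data):
    q1, _, q3 = quantiles(data, n=4, method="inclusive")
    return q3 - q1

core = [3.0, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 8.0, 9.0]
contaminated = core + [95.0]   # a single outlying long take

print(stdev(core), iqr(core))  # both modest
print(stdev(contaminated))     # inflated many times over
print(iqr(contaminated))       # almost unchanged
```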
See Redfern (2010) for a discussion of robust measures of scale from which we may choose.
1. An outlier is an observation (or subset of observations) well separated from the bulk of the data, or that in some way deviates from the general pattern of the data (Maronna, Martin, & Yohai 2006: 1). Since it is possible to accurately record the length of each and every shot in a film we are concerned with true outliers (correctly observed data values distant from the mass of the data) and not with gross errors (atypical values arising through human or technological error, faulty sampling, inappropriate assumptions about populations, etc.), though obviously film scholars are not immune to the latter.
2. The robustness of a statistic refers to 'the ability of a procedure or an estimator to produce results that are insensitive to departures from ideal assumptions' (Hella 2003: 17).
3. The breakdown point of an estimator is the smallest proportion of outliers a data set can contain before it becomes unreliable.
Adams B, Dorai C, and Venkatesh S 2002 Formulating film tempo: the computational media aesthetics methodology in practice, in C Dorai and S Venkatesh (eds.) Media Computing: Computational Media Aesthetics. Norwell, MA: Kluwer Academic Publishers: 57-84.
Hella H 2003 On robust ESACF identification of mixed ARIMA models, PhD thesis, Bank of Finland Studies.
Kang HB 2003 Affective contents retrieval from video with relevance feedback, in TMT Sembok, H Zaman, H Chen, S Urs, and SH Myaeng (eds.) Digital Libraries: Technology and Management of Indigenous Knowledge for Global Access. Berlin: Springer: 243-252.
Maronna R, Martin D, and Yohai V 2006 Robust Statistics: Theory and Method. Chichester: John Wiley & Sons.
O'Brien C 2007 Lights of New York, Cinemetrics Database, http://www.cinemetrics.lv/movie.php?movie_ID=690, accessed 24 January 2011.
Redfern N 2010 Robust measures of scale for shot length distributions, Research into Film [blog], http://nickredfern.wordpress.com/2010/07/15/robust-measures-of-scale/, accessed 10 July 2012.
Salt B 2007 Scarlet Empress, Cinemetrics Database, http://www.cinemetrics.lv/movie.php?movie_ID=615, accessed 24 January 2011.
Schaefer RJ and Martinez TJ 2009 Trends in network news editing strategies from 1969 through 2005, Journal of Broadcasting and Electronic Media 53 (3): 347-364.
Vasconcelos N and Lippman A 2000 Statistical models of video structure for content analysis and characterization, IEEE Transactions on Image Processing 9 (1): 3-19.
Wilcox RR 2011 Modern Statistics for the Social and Behavioural Sciences: A Practical Introduction. Boca Raton, FL: CRC Press.
Back in 1974, when I initiated the systematic study of film style using statistics, which I called 'Statistical Style Analysis' (as I still do), Sight and Sound rejected an article showing my first results, which of course contained graphs and tables of numbers, although they had just published my piece putting forward a general theoretical framework for film analysis, 'Let a Hundred Flowers Bloom'. Discussing this quite some time later with Ray Durgnat, I said to him that most people's minds just freeze up when they see a graph. He riposted that it was worse than that, for most people's minds freeze up when they see a decimal point. Fortunately, 'The Statistical Style Analysis of Motion Pictures' was published later that year by Film Quarterly, because the editor had had a bit of a scientific education before he got into writing about movies.
I was already well aware of the problem that most people have with mathematics, so although I briefly indicated in ‘Statistical Style Analysis of Motion Pictures’ what I thought was the nature of shot length distributions in feature films, I did not go further into the matter until much later, but restricted myself to only the most basic use of statistics when dealing with other areas of film style, because I was seeking the widest audience I could get. That audience might be just able to cope with the notion of the arithmetic mean of a collection of numbers, as long as it was referred to as an average, though it is doubtful if many of them could actually calculate it. Obviously I was already using the median of the shot lengths of a film back then in fitting theoretical distributions to the empirical distributions with which I was working. So I was never in any doubt that the median was a necessary measure for this purpose. But how many people can calculate it if given a set of numbers, and pencil and paper?
Nick Redfern seems to be suggesting banning the use of the concept of the Average Shot Length, but he surely can’t be serious. Such an idea seems reminiscent of the Catholic church continuing its ban on the discussion of the idea of the earth going round the sun, even after the concept was in wide use. As I have shown, in ‘The Metrics in Cinemetrics’ and elsewhere, the combination of the standard statistical measures, including the ASL and the median, can reveal major new features of the shot length distributions of feature films. The most important of these is that most sound feature films with an ASL of less than 15 seconds have a shot length distribution of roughly the same shape. Which raises a big question, why should this be so?
Another concept being put up for discussion is the idea of naming some members of a distribution as ‘outliers’, and then excluding them from consideration. I have already indicated why this is a bad idea in ‘The Metrics in Cinemetrics’. As another example, I will use the shot lengths for The Grapes of Wrath, as recorded by myself and placed on the Cinemetrics database.
That little bar on the end of the graph represents the three shots in the film longer than 100 seconds. They are actually 100 seconds, 104 seconds, and 160 seconds in length respectively. You might think they are far detached from the rest of the shots in the film, which are all shorter than 70 seconds, but actually the theoretical distribution that best approximates the actual distribution of shots predicts a certain probability of shots occurring beyond 70 seconds. The probability of any particular length of shot beyond 70 seconds is very low, but if you sum these probabilities from 70 seconds to infinity, they say that there are likely to be between one and two shots of lengths greater than 70 seconds out there. So having three shots is more than expected, but not a lot more. Turning to the idea of leaving these three shots out of consideration, this would of course remove the uniqueness of this film. To be more specific, the 104-second shot is the second-last shot in the film, in which Ma Joad has the last word, delivering her long speech ending with 'We're the people' in a close two-shot showing her and Pa Joad sitting in the front seat of their truck heading towards another fruit-picking job. Her speech is actually interrupted by two short responses from Pa Joad within the shot, and these short speeches could have been used to break the scene up into more shots. Another, more ordinary director might well have done so, but the way John Ford chose to shoot it is one of the many details that make this film special, and indeed unique.
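The tail calculation alluded to here can be sketched under a lognormal model. The parameters below are invented stand-ins chosen only to illustrate the arithmetic, not a fit to The Grapes of Wrath:

```python
# Expected number of shots longer than 70 seconds under a lognormal model:
# n * P(X > 70), where P is computed from the normal CDF of log-lengths.
# mu, sigma and n_shots are hypothetical, not fitted to any film.
from math import log
from statistics import NormalDist

mu, sigma, n_shots = 2.2, 0.75, 600
tail_probability = 1 - NormalDist().cdf((log(70) - mu) / sigma)
expected_long_shots = n_shots * tail_probability
print(expected_long_shots)   # between one and two shots for these parameters
```

Summing the low per-shot probabilities over the whole tail is what produces the "one or two shots expected" figure, so observing three such shots is unusual but not wildly so.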
The series of shot lengths making up any film is also unique, and so is the distribution of lengths resulting from them. They make up the whole population with which our statistical analysis of a film deals, and are not a random sample from some larger population. So any test or method which assumes that they are part of a larger population is being misapplied. However, the shapes of the distributions of shot lengths of most feature films do resemble each other, without being identical. How close they can get in shape can be illustrated by taking three films very different in ASL (and median shot length), from very different periods, and then superimposing their shot length distribution graphs. The films are Ace Ventura: Pet Detective (1995), with ASL = 4.7 sec. and Median = 2.7 sec., The 39 Steps (1935), with ASL = 9.0 sec. and Median = 4.3 sec., and On the Beach (1959), with ASL = 19.1 sec. and Median = 8.8 sec. The frequencies for each length of shot have been normalized by dividing by the number of shots in each film, and the shot durations on the x-axis have been adjusted in the cases of Ace Ventura and The 39 Steps by dividing by 2.7/8.8 and 4.3/8.8 respectively. (That is, the ratio of the Median values for each of the first two films to that of the last film mentioned.) Incidentally, I could have used the ASL instead of the Median in this normalization, and got almost identical results.
The coincidence of the three graphs shows that they do indeed have almost the same shape, and the small discrepancies correspond to the general way the shape of shot length distributions changes slightly as we come up to more recent times and faster cutting. In the case of the Lognormal distribution, to which most film shot length distributions approximate if the ASL is less than somewhere around 15 seconds, the shape parameter sigma is fairly constant, and it is this that produces the coincidence of the distributions illustrated above.
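The median-based rescaling described above can be sketched as follows. The three "films" here are simulated lognormal samples sharing one shape parameter sigma, not the actual titles named above:

```python
# Rescaling each film's shot lengths by its median, as in the superimposed
# graphs: films with very different ASLs but a shared lognormal shape
# parameter sigma collapse onto approximately one distribution.
import random
from statistics import median

random.seed(1)

def simulated_film(mu, sigma=0.9, n=600):
    """A hypothetical film: n lognormally distributed shot lengths."""
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

films = {"fast": simulated_film(1.0), "medium": simulated_film(1.6),
         "slow": simulated_film(2.2)}

for name, shots in films.items():
    m = median(shots)
    rescaled = [x / m for x in shots]         # durations in units of the median
    print(name, round(median(rescaled), 3))   # every median collapses to 1.0
```

After this division every rescaled distribution has median 1.0 by construction, and because sigma is shared, their shapes coincide; with differing sigmas the rescaled histograms would still separate.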
I am very pleased to see that Mike Baxter's detailed paper endorses the results and positions I have put forward in 'The Metrics in Cinemetrics' and elsewhere. The one part of his work I have some small doubts about is his analysis of what he calls 'lumpy' distributions. Of the twelve distributions he discusses in this context, not all of them look lumpy to me.
I would agree that when looking at the shot length distribution for Pursuit to Algiers directly one can see lumps:
but it does not look at all like a bimodal distribution to me. I see no second modal peak standing out from amongst the many small lumps. The only one amongst Mike Baxter’s twelve examples quoted that does have a suggestion of a real second maximum peak when we look at the actual distribution is Harvey:
One could take it, perhaps, that there is a second distribution having its mode at 16 seconds, and the crossover between the two is around 12 seconds, but then how do you tell which shot in the film is in which distribution? As far as I remember the film, the scene dissection was rather clumsy, and in particular the handling of the long takes.
Anyway, kernel density estimates are done by putting the distribution into a very small number of class intervals, so they could be creating something that is not really there on the finer scale of the actual distributions. Look at the shot length distribution of Foreign Correspondent:
That looks fairly smooth to me as shot length distributions go. Maybe the small deficiency in shots of length between seven and eight seconds freaked out Mike Baxter’s KDE calculation for some reason.
Finally, I repeat the knockdown counter example I gave in ‘The Metrics in Cinemetrics’ to Nick Redfern’s comparison of The Scarlet Empress (1935) and The Lights of New York (1928) with my comparison of the distributions for The Lights of New York and The New World (2005). Both these films have median shot lengths of 5.1 seconds, so on this ground alone you might think they have similar distributions, but when you look at their other features it turns out they are very different.
The crucial feature is that in The Lights of New York there are a substantial number of shots with length greater than 50 seconds, in fact 12 of them, represented by the tall bar at the right end of the graph, whereas there is only one for The New World. The reason for this substantial number of long takes in The Lights of New York is that it is subject to the technical constraints on shooting and editing synchronized sound that I describe in Film Style and Technology: History and Analysis. Like many films made at the very beginning of the use of synchronized sound, The Lights of New York is a mixture of scenes done in long takes shot with direct sound, and action scenes that were shot wild and post-synchronized. The use of normal fast cutting in the latter was of course unconstrained by technical factors. Another well-known film that has the same hybrid form is Hitchcock's Blackmail. However, if we take into account the ASLs of the two films, with 9.9 seconds for The Lights of New York and 6.8 seconds for The New World, this significant difference is highlighted. So just using the median alone as a statistic can also be very misleading.
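The point that two films can share a median while differing sharply in ASL is easy to verify numerically. Here is a small Python sketch with invented shot lengths (not the real data for either film): both lists have the same median, but a tail of long takes in the first pulls its mean well above the other's.

```python
from statistics import mean, median

# Invented shot lengths in seconds, purely for illustration:
# a "hybrid" early-sound-style mixture with a few very long takes,
# versus steady fast cutting throughout.
hybrid = [2, 3, 4, 5, 5, 6, 8, 55, 60, 70]
steady = [3, 4, 4, 5, 5, 6, 6, 7, 8, 9]

assert median(hybrid) == median(steady) == 5.5
print(mean(hybrid))  # 21.8 -> the long takes dominate the ASL
print(mean(steady))  # 5.7
```

On the median alone the two "films" look identical; the ASL exposes the hybrid structure, which is exactly why Salt argues for looking at both statistics together with the full distribution.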
Barry Salt, 2012
After reading Mike Baxter's first contribution to this discussion, I just cast a quick glance over the method of calculating kernel density estimates in a textbook, and failed to understand it properly. Hence my mistaken use of words about it in my Graphs and Numbers piece. However, I STILL think it is worrying that the use of this technique can give such a misleading result in at least one instance, as is the case for the shot length frequency distribution of Foreign Correspondent, where the use of KDEs apparently says that the shape of the distribution is 'lumpy', although this is definitely not the case when one looks at the actual distribution. As far as I am concerned, shot length frequency distributions present no difficulty in their interpretation, because they are a direct representation of how many shots there are of different lengths in the film. They are constructed by the simplest arithmetical means, unlike what is necessary for most highly advanced statistical methods. So there is a chance that a few more people could learn to understand them. And more importantly, they can be used to discuss what proportion of shots are greater than a given length, and how this relates to what is going on in the film itself, and other such questions. Shot lengths are only one part of a film's style, and like the other aspects of film style, only become really interesting when they are related to features of the content of the film.
It is possible to use new and complicated mathematical techniques without understanding how they work, by just entering the numbers that you have into a computer program, as Mike Baxter says, but there is a chance that you might make a mistake in your deductions that you don't recognize, because you don't understand what is going on mathematically in the computerized calculation.
Finally, a comment on the LOESS smoothing that Mike Baxter has introduced in his second piece. The results of the use of this technique with a span of 1/15 seem to bear some resemblance to the repeated application of moving averages to locate the changes of cutting speed from scene to scene in movies that is demonstrated in my Cinemetrics Studies article Speeding Up and Slowing Down. Also, I again have a small worry about the location in time of the maxima and minima that the LOESS technique indicates. Some of these seem to be shifted by some minutes from where the original data in the Cinemetrics graphs suggest to me that they should be.
Barry Salt
One of the points I was trying to make in the paper was that concerns about mathematical and computational complexity that were valid in the 1970s are much less so in 2012, given the sophistication of state-of-the-art statistical software (which also happens to be free). It’s much easier to create a histogram in R than in EXCEL, for example, and you have more control.
KDEs and histograms are just different forms of density estimate, underpinned by the same fundamental ideas, and for many purposes it can be argued that KDEs are preferable. OK, KDEs are mathematically more complex, but the user doesn’t need to know this to use them effectively. Given a data set, mydata, use hist(mydata) for a histogram and plot(density(mydata)) for a KDE to get starting positions, then just play with the arguments in the hist and density commands – as discussed in the article – until you get something you are willing to present to others. If you really are concerned that KDEs misrepresent the data and histograms don’t, it’s easy enough to overlay the two plots to check.
I think, in fact, that the concern is misplaced and that Barry Salt muddies the waters by saying that KDEs can give misleading results, implying that this can happen because of a lack of mathematical understanding of the technique. You can argue about my use of a log-transform, and that I’ve undersmoothed the data for Foreign Correspondent, but the problem (if it is that) is with me, not the technique.
You can as easily run into EXACTLY the same problems with histograms, with the additional concern (easily enough demonstrated if you play around with the possibilities) that the appearance of a histogram is affected by the choice of interval boundaries, and can mislead as a result. KDEs are not so affected, which is one reason why some prefer them.
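Baxter's point about bin boundaries can be demonstrated in a few lines. The sketch below (Python, with invented shot lengths) counts the same data into one-second bins whose boundaries start at 0 versus 0.5: the apparent shape of the histogram changes, while a Gaussian KDE, which involves no boundaries at all, is evaluated identically either way.

```python
import math

data = [1.9, 2.0, 2.1, 2.9, 3.0, 3.1, 3.9, 4.0, 4.1]  # invented shot lengths

def hist_counts(xs, origin, width=1.0):
    """Bin counts for a histogram whose bin boundaries start at `origin`,
    returned largest-first so the 'shape' is easy to compare."""
    counts = {}
    for x in xs:
        b = math.floor((x - origin) / width)
        counts[b] = counts.get(b, 0) + 1
    return sorted(counts.values(), reverse=True)

# Same data, boundaries at 0,1,2,... versus 0.5,1.5,2.5,...
print(hist_counts(data, 0.0))  # [3, 3, 2, 1] -- clusters split across bins
print(hist_counts(data, 0.5))  # [3, 3, 3]    -- clusters fall inside bins

def kde(x, xs, h=0.5):
    """Gaussian kernel density estimate at x; no bin boundaries involved."""
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in xs) / (len(xs) * h * math.sqrt(2 * math.pi))
```

The bandwidth `h` plays the role for the KDE that bin width plays for the histogram; the choice of `h` matters, but there is no analogue of the boundary-placement artefact.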
In my two contributions to this debate I’ve tried, to some extent, to separate issues about the use of the median and ASL from that of the lognormality of SL distributions. A main general point, though, is that some forms of departure from lognormality, more manifest as departures from normality on a logarithmic scale and what I termed ‘lumpiness’, may militate against the use of either statistic as a summary descriptive measure.
Interpretation is subjective and Foreign Correspondent is marginal. One of the things the KDE tells you (you need to work to get this) is that about 33.5% of the SLs are between 1 and 3 seconds (inclusive), compared to the 26% or so you might expect from the lognormal. Whether this difference is large enough to mean anything I leave for others to judge. On the subject of histograms, and a point raised in Barry’s comment: if you wish to look at proportions that exceed some particular SL, for example, cumulative frequency diagrams (CFDs) seem a more natural tool to use than the histogram – I believe Nick Redfern has exploited this in a number of posts on his research blog.
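A comparison of this kind is straightforward to reproduce in outline. The Python sketch below is an illustration, not Baxter's actual calculation: it fits the lognormal by the simplest method (mean and standard deviation of the logged shot lengths) and compares the empirical proportion of shots in a band with the proportion the fitted lognormal predicts, using the error function for the lognormal CDF.

```python
import math
from statistics import mean, stdev

def lognorm_cdf(x, mu, sigma):
    """P(X <= x) for a lognormal whose log has mean mu and s.d. sigma."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def compare_band(shot_lengths, lo=1.0, hi=3.0):
    """Empirical vs fitted-lognormal proportion of shots in [lo, hi] seconds."""
    logs = [math.log(s) for s in shot_lengths]
    mu, sigma = mean(logs), stdev(logs)
    empirical = sum(lo <= s <= hi for s in shot_lengths) / len(shot_lengths)
    fitted = lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma)
    return empirical, fitted
```

For a genuinely lognormal sample the two numbers roughly agree; a gap of the size Baxter reports for Foreign Correspondent (33.5% against 26%) is the kind of discrepancy that flags a possible departure from lognormality.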
I regret that Barry’s article Speeding Up and Slowing Down had previously escaped my attention. My use of the LOESS smoother applied to MAs is identical in spirit to his earlier use of repeated MAs. I’ve been looking more closely at the LOESS smoother applied to SL data since writing the article, and will comment more on this, if appropriate, when Yuri moves the ‘conversation’ on to other topics. Barry is right about the ‘shift’ in maxima and minima in the figures; the problem is not the smoother so much as the plotting positions selected in the plotting routine used and this should be easily corrected.
A diagrammatic representation of the plot should offer a succession of peaks and valleys, each peak a little higher than the last and each valley above the level of the one before it. The highest peak represents the climax, and from there the diagram slants sharply toward the bottom. (Epes Winthrop Sargent, Technique of the Photoplay (New York: The Moving Picture World, 1916), p. 45)
Running lines, hand-drawn or conjured up, have been part of thinking about cinema for more than one hundred years. As described by a screenwriter and screenwriting instructor in 1916, the diagrammatic landscape sketched in the epigraph above is similar to ones a number of other film analysts and filmmakers have come up with in order to visualize the internal dynamics of a movie. There, as here, the movie is represented as a time series and its dynamics, as a curve.
Since diagrams like this were not based on statistical data and showed no numerical values attached to this or that point on the curve, they are not time-series analyses in the strict sense, statistics warns us. This does not put old time-series diagrams out of court, if only as pieces of historical evidence. To distinguish them from time-series analyses proper, I will call non-numeric time graphs of this kind time-series models.
We too followed this kind of model when we launched the Cinemetrics site; peaks and valleys are what interested us in the first place. We even turned the usual Y-axis of shot lengths upside down for our graphs to dovetail with the time-honored tradition of representing the climax as an apex, not as an abyss:
Figure 1: Cinemetrics-generated time-series diagrams for The Skin Game (above) and Easy Virtue, measured in simple and advanced modes by Charles O’Brien, the trendline flexibility set at degree 6. Note the inverted shot length scale (Y-axis), which allows bouts of faster cutting to be represented as peaks, not valleys.
We hardly suspected at the time that an alternative way of visual display could (and would) prove useful for analyzing editing: non-time-series statistics (I looked for, but could not find, a non-negative way of describing it; would “static statistics” sound like stuttering?). What Barry Salt, Nick Redfern (and James Cutting and Mike Baxter more recently) have been doing with familiar film titles looked as fascinating and mysterious to us as when a magician comes up to you and produces a “boxplot” or a “violin plot” from your own pocket, as it were:
Figure 2: Comparative boxplots and violin plots, on a log scale, for Easy Virtue and The Skin Game as they appear in Mike Baxter’s (2012) Figure 6.4; see http://www.mikemetrics.com/#/cinemetricsdataanalysis/4569975605
I would not find it hard to explain a time-series graph to the man in the street, were it in Chicago where I teach or in Latvia where Cinemetrics was born, for the metaphors I’d need to bring the message home derive from our common terrestrial environment, familiar to all: peaks and valleys; and even if some Latvians or Chicagoans (two flatlands, each beautiful in its own way) were unable to grasp the alpine metaphor, I could easily resort to the metaphor of waves. But when confronted with objects such as those shown on Figure 2, the man in the street might call a ufologist for help.
The two graphs on Figure 1 are time-series analyses of Easy Virtue (1928) and The Skin Game (1931) by Alfred Hitchcock; the paired graphs on Figure 2 present the same two films through comparative graphs: boxplots to the left, violin plots to the right. Which of these pictures, one might be tempted to ask, is the closer portrayal of Hitchcock’s editing style in those (still British) years?
Wrong question: the main thing I learned from the first lap of our conversation on film statistics is that no statistical method alone gives us a privileged access to the truth. The median or the mean? Both are needed to make the picture bifocal. Descriptive statistics do not compete, they team up, and so do methods of visualizing them. That its data proved to be open to unpredicted uses is the best thing that could have happened to Cinemetrics.
What we owe one another are some comments. It goes without saying that unfamiliar shapes like the ones shown on Figure 2 deserve explaining; less obviously, so does the fact that the wavy lines on Figure 1 look so familiar to all. Let me start with a quick glance at time-series modeling from here back to antiquity; I will then pause for us to hear what you might have to say.
Every film is a time series and so is every plot, as we happen to know from Aristotle. According to his Poetics (350 B.C.: Chapter 7), the plot is a whole that has a beginning, a middle, and an end. We do not know whether or not Aristotle was in the habit of drawing in the sand as Archimedes is said to have been, but if he was, the plot diagram would likely look like this: /\
Why like /\ and not like  or like \_/? Such is the curve of all dramatic tension, explained the nineteenth-century drama theorist Gustav Freytag, from whose 1876 book Die Technik des Dramas the following diagram is borrowed:
Figure 3: The dramatic structure diagram known as the Freytag pyramid. The vector runs from a to e, and the whole consists of 5 elements, not 3.
Importantly, Freytag’s graph is a dynamic affair. Introduction (a) and catastrophe (e), found at the base of Freytag’s pyramid, are the only two terms with no explicit kinetic connotations; the other three relate to the movement of the curve: b) rising; c) high point; d) fall or U-turn (Umkehr).
In 1890 another theorist, Alfred Hennequin, used his mind’s eye to zoom in on the rising side of the dramatic pyramid, to find that it is a multiple composed of reduced-size copies of the whole:
Figure 4: The dramatic structure diagram from The Art of Playwriting by Alfred Hennequin (1890)
In this diagram, the rising slope is not a single line, but a ladder that consists of nine small rises, apices and falls. It is Hennequin’s idea of dramatic structure that Barry Salt’s Film Style and Technology names as a precursor to narratives in film. Sargent’s verbal diagram which I used as my epigraph conforms to this scheme, as does a metaphor-in-motion which Victor Oscar Freeburg used in 1918 (Freeburg must have been on a train when he thought it up):
Let us symbolize the progression of dramatic attention by a loosely hung cable which ascends a hillside rhythmically over a row of posts. The angles, or apexes, of the cable would each represent a crisis, except the highest, which would represent the climactic point of the plot (Victor Oscar Freeburg, The Art of Photoplay Making (New York: The Macmillan company, 1918): 258).
What happens after the highest point is reached? Sometimes the structure of a drama has this form, and sometimes its form is like this, states George Rockhill Craw’s 1911 essay The Technique of the Picture Play – Structure, using two micro-diagrams inserted between the words – the “sparkline” type of display reinvented a century later by Edward Tufte and imported into Cinemetrics by Gunars Civjans – to illustrate the this and the this:
Figure 5: The dramatic structure diagrams by George Rockhill Craw in The Moving Picture World (January 28, 1911, Vol. 8 no 4, p. 178) to the left; Cinemetrics sparklines to the right
Both time-series diagrams on Figure 5 are intuitive and informative. The Skin Game (Figure 5, to the right) has three strong bursts of faster cutting; Easy Virtue, two weaker ones. Craw’s first sparkline (to the left) is closer to how the stage drama is structured; the second one is closer to film: a steep fall after a longer rise was a firm rule for ending a picture play in 1911 (as now).
A time-series diagram can also tell us things about cultural idiosyncrasies of film style, for instance, whether or not this or that picture conforms to Aristotle’s triad or to instructions issued by his interpreters from Freytag to Craw. Take the following diagram found in The Art of Cinema and the Film Montage, written by the Soviet filmmaker Semen Timoshenko and published in Leningrad in 1926. Timoshenko’s system is too complex to address here in detail; it suffices to say for our purposes that his model of editing hinges on perceptible changes in editing rhythm at the moments of dramatic tension.
Timoshenko calls those moments “percussive spots” (udarnye mesta), which roughly corresponds to the Hollywood term punches (in place in the 1910s). The time-series diagram of the prototypical six-reel movie according to Timoshenko looks like this:
Figure 6: The montage diagram of the prototypical (normalnogo) film as it appears in Semen Timoshenko’s The Art of Cinema and the Film Montage (Iskusstvo kino i montazh filma) (Leningrad: Akademia, 1926), p. 69. Explanations in the text.
Timoshenko’s whole consists of 2 lines and 8 posts. The bottom line is straight, and is notched and labeled by reels: Reel 1, 2 … 6. The curvy line above it is punctuated by 5 single and 3 double circles. The double circles are film-scale punches; the single ones, reel-scale punches, the legend below explains. Vertical posts of different height project the punch spots onto the bottom line: in 5 cases out of 6 the reel-scale punch happens either halfway into the reel (reel 2) or briefly before the end of the reel (reels 1, 3–5).
If Freytag (1876, Figure 3) or Hennequin (1890, Figure 4) could cast a glance at Figure 6 (1926), they would have a hard time recognizing in Timoshenko’s abstraction an evolutionary descendant of their own. The three peaks are there all right, but their placement is strange. The good old drama must open and close on a calmer note; this calm before the storm comes back (Umkehr) at the closure, either as a happy ending or in the form of the eternal sleep. Not so here: two double punches out of three mark both the opening and the closure of the film; and the intensity of the middle punch (delivered only 8 minutes before the end of the hour-long movie) is lower, not higher, than the third one! No plot can move like \_/, Gustav Freytag would exclaim.
Is this dramatic illiteracy at work? There may have been some of it too, but the main thing is that Timoshenko’s model squared well with views on these matters in the circles of left-wing Soviet filmmakers in which he moved. Forget Aristotle. A revolutionary movie must grip you from the start and electrify you at the end. Closures, happy or tragic, are a thing of the past. The key task for the ending of the quintessential Soviet movie was not to provide a closure but to furnish an exit: from the past to the future, from fiction to reality, from the screen into the viewing hall. The examples are too well known to burden you with titles. The main thing is this: that the highest peak on Timoshenko’s time-series diagram is not followed by any slope, steep or gentle, is not an oddity or an error, but the outcome of an artistic doctrine.
Time-series modeling is not pure speculation. What makes time models relevant to cinemetrical studies is their embeddedness in all three echelons of film production: pre-production, post-production and production proper. Pre-production is mainly about scenarios and scripts. Some (not all) Soviet shooting scripts of the 1920s included projected footage (in meters) for each shot. Here is one from 1929:
Figure 7: Details from a shooting script by the Vasiliev brothers for The Sleeping Beauty (1930) before shooting (to the left) and after some of the shots (crossed out) have been filmed (to the right). A shooting script of this type is a table in 5 columns, the 4th of which lists shot lengths in meters: 2, 2, 1 ½, 1, etc. Explanations in the text
As we can see from the two scraps of the Vasilievs’ shooting script shown on Figure 7, what needs to be filmed, and how, is laid out as a table with a thousand-plus rows, one per shot, and five columns specifying the position, place of action, closeness, length and contents of each shot. “I thoroughly approve of the dictum that if anything is worth putting into a table it is worth analysing statistically,” says Mike Baxter in his recently published “Picturing the pictures: Statistics and film” (Significance 2012, Volume 9, Issue 5, p. 6). If the dictum applies here as well, one can take all 1661 time values from the 4th column of the script (available in full, while only around a third of the resultant film survives) and convert them into a time-series graph. No one has done this job, so whoever volunteers will be able to discover the time-series model which the Vasiliev brothers envisaged for their yet unmade film.
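The conversion such a volunteer would need is from footage to time. A sketch of it, in Python, is below. The film-stock arithmetic is standard (35 mm film carries 16 frames per foot, hence roughly 52.5 frames per metre), but the projection speed is an assumption flagged here as such: silent-era speeds varied, and 24 fps is used purely for illustration.

```python
FRAMES_PER_METRE = 52.5   # 35 mm film: 16 frames per foot ~ 52.5 per metre
FPS = 24                  # ASSUMED projection speed; silent speeds varied

def metres_to_seconds(metres):
    """Convert a scripted shot length in metres of 35 mm film to seconds."""
    return metres * FRAMES_PER_METRE / FPS

# The first shot lengths legible in the Vasilievs' script (metres): 2, 2, 1.5, 1
script_lengths = [2, 2, 1.5, 1]
shot_seconds = [round(metres_to_seconds(m), 1) for m in script_lengths]
print(shot_seconds)  # [4.4, 4.4, 3.3, 2.2]
```

Applied to all 1661 values in the script's 4th column, this would yield the series of projected shot durations from which the Vasilievs' intended time-series model could be graphed.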
To what extent pre-timing defines what takes place on the set is a separate question. “Photoplays are put on,” one 1913 manual of screenwriting aphorized, “with a stopwatch in one hand and a yardstick in the other” (J. Berg Esenwein, Arthur Leeds, Writing the Photoplay (Springfield, Mass.: The Home Correspondence School, 1913), p. 147). Did many directors use a stopwatch to know when to say “cut”? Yasujiro Ozu did; we are unaware of many other examples. More typically, film directors think about the action rather than the cutting on the set. Lev Kuleshov was a notable exception: his rehearsing and staging method required that, however the actors moved, their movements had to be inscribed into a temporal grid. Here is a time-series model (1935) which Kuleshov built to explain the method:
Figure 8: Diagram from Lev Kuleshov’s book The Practice of Film Direction (1935), English translation in: Kuleshov on Film (Berkeley, Los Angeles, London: University of California Press, 1974), p. 193. The curve represents action; the straight line, cuts between shots. Explanations in the text
The X-axis on Figure 8 is marked by 6 notches; these are the cuts delimiting 5 shots of unequal length. The sinusoidal curve depicts the ups and downs in the tempo of action along the imaginary Y-axis. The montage rhythm of a film, says Kuleshov, emerges from the interplay between the beat of the cutting and the flow of the action. That each time a cut occurs on the A–F axis the curve, too, crosses it is a reminder: always think about cutting when directing actors on the set.
Post-production is when time-series modeling takes center stage, especially in documentary filmmaking, where the temporal shape of a film is harder to script beforehand or to control during the shooting. “That I refuse to use scenarios and actors,” the die-hard Soviet nonfiction director Dziga Vertov declared in 1922, “does not mean that I rob my films of structure.” Appended to this manifesto is the following visual construct:
Figure 9: Diagram from Dziga Vertov’s manifesto “We” (Kino-Fot 1922, No 1, p. 12). The smaller drawing below is a skewed and angular version of the arch-like chart above. Explanations in the text
The lines we see on the upper part of the drawing form seven arches of different sizes, two pairs of smaller arches nested beneath two larger ones, and these in turn beneath the king-size arch. The overarching one (a–k) represents the entire film, which Vertov calls “work”. The small ones (a–b, b … k), called “phrases,” represent sequences of shots. Each sequence has its peak (marked by dotted verticals), as does the work as a whole (the solid vertical).
This, of course, is an abstract compass-drawn time-series model (its smaller-scale version below tries to amend this: here peaks are peaks, and are poised forward); an interesting question to ask, however, is to what extent, if at all, Vertov’s practice of editing lived up to his own theoretical standards. Of the films Vertov made in 1922, Kino-Pravda 9: (8) ASL 5 turned out to be the closest approximation to the model:
Figure 10: Detail of the diagram from Dziga Vertov’s manifesto “We” (Kino-Fot 1922, No 1, p. 12), top left; Cinemetrics diagrams for Kino-Pravda 9: (8) ASL 5 at degrees 2 (top right), 4 (bottom left) and 8. Interpretations in the text.
I am not sure if the comparisons made on Figure 10 make much sense mathematically and statistically (or, for that matter, any sense at all), but what I did above was to see what happens to the trendline at different degrees of smoothing. I chose degree = 2, degree = 4 and degree = 8, for the mere reason that the series 2, 4, 8 shares two terms with the numbers 1, 2, 4 which define the self-similar pattern of Vertov’s time-series model (Figure 10, upper left). At degree = 2 (upper right) the curve on the Cinemetrics diagram looks quite similar to the umbrella curve of Vertov’s ideal graph; degree = 4 gives us two humps, somewhat like Vertov’s two second-tier arches; degree = 8 gives us four humps, which is equivalent to the number of “phrases” (a–b, b … k) on Vertov’s graph, though, of course, they are not nearly as neat. All this may be smoothing tricks, of course; but then, isn’t using a pair of compasses to draw ideal time-series lines also a form of smoothing?
To round up: time-series analysis inherits from time-series modeling, which is a medium-specific (Figure 5), culture-sensitive (Figure 6) and movie-generative (Figure 7) procedure. This is the statement part of my question. The question proper will be about that newer, more sophisticated but also more esoteric part of our common toolkit, the thing called statistical distributions.
Cinemetrics is editing-in-little. What we do as we click on cuts to measure a film, and what happens when Cinemetrics converts our series of clicks into a graphic semblance of a movie, replays, in a nutshell, what the editor had been doing when our film was still on his or her editing table.
Figure 11: Shots related to the process of editing copied from Dziga Vertov’s Kino-Pravda 18 (1924) (B) and Man with a Movie Camera (1929) (A & C). Interpretation in the text.
The order of shots shown on Figure 11 is in sequence with the three operations which film editing before video involved. First, shots of different lengths are stored tails up on backlit shelves for the editor to see what is on which (A). Operation B is to choose one, trim its length and join it to another one against the backlit flatbed of the editing table (C); and so on till the film ribbon has grown into a reel.
The ABC sequence of editing operations is remotely like what they call positional distribution in linguistics, which concerns which elements of language fit (or do not fit) in this or that position within the current of speech. As I understand from reading Salt, Redfern and Baxter, statistical distribution entails the inverse: the analyst removes shots from serial time, from the “montage syntagm” as it were. It is as if the ABC order on Figure 11 were read right to left, to become CBA. I may be wrong about this, but isn’t the lognormal distribution an intellectual equivalent of disassembling, unediting a film and putting shots of different lengths back on the shelves (Figure 11, shot A) or into a number of “bins” to see which bins are more densely loaded? Both ABC and CBA are heuristically significant moves. We have seen what we can learn by looking at how shots behave when spliced together; what do we find out when we put them back into bins?
“The preceding paragraph is indeed a rough way of describing how a distribution is created,” Barry Salt wrote to me a few days ago in response to a rough-draft version of this. It may be rough, I agree, and this is exactly why I leave this paragraph intact: if it has caused Barry’s response, it works as a question. What makes statistics look warm and human in my eyes is that there is as little agreement on any issue as there is in the humanities, and as many methods to use to approximate the truth. The more we care for films, the more we worry and quarrel over them. What is better for them, parametric or nonparametric treatment? Does Foreign Correspondent look lumpy or not? And even the question of each film’s normality seems as opinion-based among statisticians as it would be among psychiatrists.
I wish I could come up with more pointed questions, Barry, but these would require more statistical competence than I have. I know I should have done my homework, Nick. On the other hand, if I had this conversation might risk turning into a shop talk, which is not its goal. Do not feel tied to my questions if they look too general to make much sense – feel free to ask your own.
Lines and Graphs
It would be nice if 'dramatic tension' could be quantified, so that we could get on with the important job of analysing its relation to all the visible features of movies. It IS something that is conceivably possible, but far away at present. In the meantime, we are just messing about with what little can be got from measuring the shot lengths in films. Nowadays, the only measurement of shot lengths that matters is in hours, minutes, seconds and frames. In the Cinemetrics system, the number of frames is reduced to a duration measured in tenths of a second, but this involves an irregular rounding up and down process from the original 24 frames per second at which motion pictures are still shot.
Mostly this does not matter, but sometimes it is better to return to using the original measurement of shot lengths in terms of their length in film frames. There are also some minor problems in measuring the actual length of a shot which depend on the type of transition used by the filmmakers between one shot and the next. The majority of shot transitions are straight cuts, which are unproblematic. But most films also include some dissolves from one shot to the next. Here, the obvious transition point is halfway down the length of the dissolve. That is, one counts frames from the frame in which the first faint part of the image of the incoming shot is visible to the frame in which the last faint part of the image of the outgoing shot is visible, and then halves this count to get the exact transition point. However, some people, including James Cutting's team, estimate this transition by looking for the frame in which the two superimposed images appear equally bright. This will probably be correct to within a frame or so if the incoming and outgoing shots have the same average overall brightness. But quite often one of the shots will be much darker than the other, and if this is so, the estimate of the midpoint of the transition can be out by several frames. Sometimes, the transition from one shot to the next is accomplished by a fadeout followed by a fadein. After the frame in which the last sign of the outgoing shot is visible at the end of the fadeout, there are a number of frames of black before the beginning of the fadein of the next shot. This number is usually fairly large, of the order of hundreds of frames, or seconds in time. For their purposes, the James Cutting team count this length of black as a shot, though filmmakers and others would not consider it as such. This can create a discrepancy between the Cutting team's results and those of other people working in a frameaccurate way. 
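Salt's frame-counting rule for dissolves is itself a small piece of arithmetic worth writing down. The sketch below (Python, with invented frame numbers) takes the rule as stated: count from the first frame in which the incoming shot is faintly visible to the last frame in which the outgoing shot is faintly visible, and halve that span to place the transition.

```python
def dissolve_cut_point(first_incoming_frame, last_outgoing_frame):
    """Midpoint rule for a dissolve: the transition is placed halfway
    between the first faint appearance of the incoming shot and the
    last faint trace of the outgoing shot."""
    length = last_outgoing_frame - first_incoming_frame + 1
    return first_incoming_frame + length // 2

# Invented example: a 48-frame dissolve (2 seconds at 24 fps)
# beginning at frame 1000 of the film
print(dissolve_cut_point(1000, 1047))  # 1024
```

Note that this rule depends only on frame counts, not on image brightness, which is why it avoids the several-frame error Salt describes for the equal-brightness method when one shot is much darker than the other.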
This is not particularly important in general, and it certainly wouldn't matter if we are comparing it with the approximate length measurements generated by the original Cinemetrics application, where lengths are measured on the fly as the film runs past. Another arguable point in measuring shot lengths concerns films in which there are a number of split-screen shots. That is, the film frame is divided up into a polyptych with two or more different images visible within it. These subsidiary images can change while the polyptych is visible on the screen, sometimes quite frequently. My practice is to count any such change as a transition to a new shot, but at least one other analyst just counts the whole length of the split-screen sequence as one shot.
Using the Lines
So having measured the lengths of the shots in a film, they will be represented by a string of numbers in the first place. One can look along this string of numbers and spot any interesting regularities, but it sometimes is easier to see this from a graph in which the lengths of vertical lines are proportional to the size of the shot lengths. It is also easier to get the information onto a page of paper in this form. In modern times, the first visual and nonnumerical representation of shot lengths was on the screen of a computer running a nonlinear editing program, where the lengths of the shots are represented by horizontal lengths, but more recently we have the Cinemetrics graph, where the shot lengths are represented by the lengths of vertical lines, as in the more orthodox inverted form of the same.
As always, what is of most interest is the relation between the content of the film and its form. Thirty years ago I started investigating how the cutting rate within a film scene varied depending on the type of action within it. This is quite easy to show, and does not require any mathematics beyond counting and averaging. You can see it in all editions of my Film Style and Technology: History and Analysis. Much more recently, I made an attempt at locating the boundaries between scenes by taking repeated rolling averages of the shot lengths in a film (in 'Speeding Up and Slowing Down' in the articles section of the Cinemetrics website.) Unsurprisingly, this approach proves to be only capable of detecting the approximate position of the scene boundaries, and that only in the best cases.
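A minimal version of such a rolling average is easy to write; the window width here is an arbitrary illustrative choice, not the setting used in 'Speeding Up and Slowing Down':

```python
def rolling_asl(shot_lengths, window=11):
    """Rolling (moving) average of shot lengths, centred on each shot.

    This is one simple way of smoothing a film's cutting rate so that
    slow and fast stretches (possible scene boundaries) stand out; the
    window is shortened near the ends of the film.
    """
    n = len(shot_lengths)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        vals = shot_lengths[lo:hi]
        out.append(sum(vals) / len(vals))
    return out
```

A film that cuts from a fast scene to a slow one shows up as a step in the smoothed series, though, as noted above, only approximately and only in the best cases.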
The other thing one can study is the shot length frequency distribution for a film. The most basic way of doing this requires no more mathematics than simply the counting of the number of shots in the film whose lengths fall within a series of fixed intervals (say from zero to one second, greater than one second to two seconds, and so on) and then drawing a bar chart where the heights of the columns represent those numbers. When looking at the resulting graph, one is seeing the numbers of shots in each 'class interval' or 'bin' directly, without any mathematical transformation of the data. This procedure is laborious, but it can be automated nowadays using a computer spreadsheet. In the data analysis tools in Excel, there is a function called 'Histogram', which when supplied with a list of shot lengths and a list of the class intervals ('bins') to put them in, produces an array like this one for the film Casablanca (1942).
It so happens that a class interval of one second works quite well for films with an ASL in the region from about 5 seconds to about 12 seconds, which is where the vast majority of films made from the 'thirties into the 'fifties dwell. So for Casablanca I used a class interval of one second. 
(The 'More' column at the right end of the graph represents the total number of shots with lengths greater than 50 seconds.)
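The binning that Excel's 'Histogram' tool performs can also be written out directly. This sketch assumes the convention that a bin labelled k collects lengths in the interval (k-1, k], with a final 'More' bucket for everything beyond the last bin; the function name and defaults are mine:

```python
import math

def shot_length_histogram(shot_lengths, bin_width=1.0, top=50.0):
    """Count shots per class interval, plus a 'More' bucket.

    Bins are (0, w], (w, 2w], ..., up to `top`; shots longer than
    `top` go into the 'More' count, as in the Casablanca graph above.
    """
    n_bins = int(top / bin_width)
    counts = [0] * n_bins
    more = 0
    for sl in shot_lengths:
        if sl > top:
            more += 1
        else:
            # ceil puts a length of exactly k*w into bin k
            idx = max(0, math.ceil(sl / bin_width) - 1)
            counts[idx] += 1
    return counts, more
```

Widening the bins (bin_width=2.0) or switching the input list from seconds to frames (with, say, bin_width=8 frames) reproduces the other graphs discussed below.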
This graph is a bit jagged and lumpy, and we can smooth this out by using a class interval of width two seconds, as in the following graph.
For films with a really short ASL, of the kind that have emerged in recent decades, things are slightly different. Take Shoot 'em up (2007), with an ASL of 1.64 seconds.
This distribution has a smooth profile, but it is not very informative, with 1484 shots, almost half the total, in the first interval containing those shots between 0 and 1 second. If we change the time measurement from seconds to frames, and make the class interval 8 frames (or a third of a second) wide, it looks like this:
As well as providing more precise information about how many shots there are with each length, the shape, though equally smooth, shows the dive from the mode at 16 frames to the origin, which is characteristic of film shot length frequency distributions. Now if we go for broke, and decrease the class interval to one frame, we get:
This graph has a much more jagged shape than the previous one, but it also shows what was not visible before, which is the way the distribution dives towards the origin. This can be made clearer with an enlarged view of the beginning of this graph from zero up to nine frames.
I have added the theoretical values derived from the Lognormal distribution that best fits the actual distribution of shot lengths for this film. Although the fit is not perfect in this region, you can see the way the actual values show the same sort of approach to the origin as the theoretical curve. The way the curve approaches the origin asymptotically is particularly characteristic of the Lognormal distribution, and it is gratifying that it appears in the actual experimental data as well. This effect also appears in the distributions for other films with an extremely short ASL, such as Derailed, but not in the distributions for films with longer ASLs, which have in general no shots shorter than eight frames.
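For readers who want to reproduce this kind of overlay, the Lognormal can be fitted by taking the mean and standard deviation of the logged shot lengths (the maximum-likelihood estimates); this is a generic sketch, and may differ in detail from how the theoretical values above were actually obtained:

```python
import math

def lognormal_fit(shot_lengths):
    """Maximum-likelihood Lognormal fit: mean and s.d. of log lengths."""
    logs = [math.log(sl) for sl in shot_lengths]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

def lognormal_pdf(x, mu, sigma):
    """Density of the fitted Lognormal.

    Note that it tends to zero as x approaches the origin, which is
    the asymptotic behaviour discussed above.
    """
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))
```

Evaluating `lognormal_pdf` at each bin midpoint, and scaling by the number of shots times the bin width, gives theoretical bin counts comparable to the observed ones.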
If one wants to compare the shape of two distributions with closely similar ASLs, one can interleave them on the graph, as in this comparison of Shoot 'em up and Derailed (2002).
The resemblance is very close, which is not surprising since the median for both distributions is 1.04 seconds, though there is a small difference in their ASLs (1.59 seconds for Derailed, and 1.64 seconds for Shoot 'em up.)
To actually measure the difference between the two distributions, we can get the Pearson correlation coefficient, which is 0.992, and indicates the closeness of the two distributions.
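The coefficient itself is a few lines of arithmetic over the two lists of per-bin counts; this generic implementation is for illustration, and is not necessarily how the 0.992 figure was computed:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists,
    here the per-bin shot counts of two films binned the same way."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near 1 means the two binned distributions rise and fall together across the bins, as they do for Shoot 'em up and Derailed.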
Going Slower
Looking at the slow end of film cutting rates, this is what we get from graphing Sunset Blvd., which has an ASL of 15.5 seconds, with a one second class interval.
This has a pretty rugged outline, which is not too surprising, given the low number of shots in each interval. So let us try a class interval of two seconds.
That is a little bit better, but more importantly it is starting to get the characteristic shape we expect from film shot length distributions. So let's try an interval of 4 seconds.
I like that shape or profile; it looks like Derailed or Shoot 'em up in their 8 frame class interval incarnation, and also the two second interval graph of Casablanca.
Moving upwards, Panic in the Streets (1950) has an ASL of 24.3 seconds, and in this case its distribution shown with a one second class interval looks like this:
This is a lot more jagged a shape than that for Sunset Blvd., and an obvious idea is to see if the isolated peaks at 17 to 18 seconds, and 34 seconds have any significance. An examination of how the shots of these lengths occur in the film with respect to what is going on in them shows no relation to the lengths of the lines of dialogue being spoken in them, for instance. (Panic in the Streets includes lots of dialogue, and none of the panic in the streets promised by the title.) Nor can I see any relation to any other variable related to the film's content that occurs to me.
Even when the class intervals are widened to 4 seconds, the distribution does not smooth out that much, unlike Sunset Blvd.:
In other words, as we get towards, and past an ASL of 20 seconds, we are in a region where any close resemblance to any standard probability distribution is vanishing, as I have often said. To rub this point in, I recall the distribution for la Signora senza camelie (1953), which has an ASL of 59.4 seconds, and is shown here with a class interval of ten seconds:
Even with such a large interval, the usual characteristic shape is only vaguely there, and the distribution is quite jagged.
Looking For Significance
My search for a reason for the excessive number of shots with lengths of 17 to 18 seconds in Panic in the Streets failed, but one failure is not enough to stop looking for reasons for such peaks in shot length distributions. Perhaps in musicals there might be specially favoured lengths in the way that musical numbers are edited, related to the regular structure of the pieces of popular music being used on the sound track? Say cuts only at the beginning and end of each chorus of a ballad being sung or played, or every four bars, or something similar.
My first candidate is Good News (1947). The shot length distribution for this film with 2 second class intervals is:
The class of shots with lengths of 39 or 40 seconds sticks out, but looking down the table of lengths for this film, and seeing where these shots occur in the film, shows that they nearly all appear separately, and mostly in dialogue scenes. (I find that the best way to study this matter is to use a table of the shot lengths written down in order in a spreadsheet in conjunction with a copy of the film in a nonlinear editor, with the length of the shots marked on the timeline.) And there is no sign of groups of shots of nearly equal length in other places down the list of lengths.
Another try with Singin' in the Rain (1952) failed in the same way, and indeed when simply playing a DVD of the film it is apparent that the editor was again not cutting the musical numbers in any rigid way at the end of each chorus, or the end of each phrase, or in even multiples of the bar length. However, a final try with Anchors Aweigh (1945) did just slightly better. (All the previous examples use my own data, but here I am using the data from the James Cutting team in the Cinemetrics database.)
The column representing shots of length between 14 and 15 seconds looks a bit too big, and indeed from shot 421 to shot 426 there occurs the following sequence of lengths:
14.1, 13.5, 27.3, 14.2, 13.5, 14.4.
But these shots cover a conversation in the studio cafeteria between Gene Kelly and Kathryn Grayson, not a song. However, after failing to find what I was looking for in about a score of musical numbers in all three films, I got lucky with just one song in Anchors Aweigh, which is the song "The Charm of You" sung by Frank Sinatra to Pamela Britton in a Mexican-type restaurant. This runs from shots 463 to 473. The shot lengths are, in sequence:
26.7, 6.3, 5.7, 6, 6.8, 6.6, 7.5, 6.7, 12.1, 8.6, 23.9.
The first shot covers the whole 32 bars of the first chorus of the song, then the subsequent shots go to different angles on the pair of lovers for each subsequent 8 bar line over two more choruses, and then the shots lengthen out to cover the last chorus with three shots, which depart from the 8 bar pattern. If you check with the Cutting data in the Cinemetrics database, you will find that he gives the first shot as 31.5 seconds. This is definitely wrong, and there seem to be a number of similar errors in his data for this film. The negative figure of 0.5 seconds for shot 396 is another of them. Shot 396 does not actually exist, so if you remove it from the list you will have something a bit closer to the actual lengths, but still having many shots with slightly wrong lengths. Although there is hardly any cutting on regular numbers of bars in these musicals, there is a visible tendency to cut about a second before the end of a chorus in musical numbers. It is possible that some regularities might emerge in this area in the way that they have in my examination of dialogue cutting in the article Reaction Time: How to edit movies in The New Review of Film and Television Studies (Vol. 9, No. 3 September 2011).
It is quite possible that if one examines musical numbers in recent decades, where the kind of pop music used is very different to that of the 'forties, and the cutting is much faster, then one might find a greater regularity in the cutting of such numbers. The observations about a scene in Wedding Crashers (2005) on page 438 of the paper Attention and the Evolution of Hollywood Film, published in Psychological Science (2010, 21:432) by James Cutting, Jordan DeLong, and Christine Nothelfer is a case in point.
The moral of my story is that you can do plenty, and see what is going on, by staying close to the data, without gussying it up with excessive manipulation.
Barry Salt, 2013
There appears to be a wealth of graphical methods that may be used to investigate the internal SL structure of a single film; some of those are detailed in Mike Baxter’s and Nick Redfern’s responses to the previous question in this conversation. The question I’d like to propose now relates to what seems to be a less charted area. All films are different as far as their SL structures go; yet some are less different than others. Some may be similar because of the time and space of their production; some because they belong to the same genre; some because the same filmmaker made them; and some are similar because they want to be different at any cost. I believe we all have a consensus on the stylistic relevance of family resemblances between films; the question is what might be the ways of revealing them. Possible solutions have been offered by Nick Redfern in his Chaplin Keystones analysis; by James Cutting and his collaborators in the course of their attempt to test the “four-act structure” hypothesis; and by Mike Baxter (see Figures 3 and 4 from his article “Lines, damned lines and statistics” found under the tab “What do Lines Tell?” of the present conversation); for more than a year, Keith Brisson and I have been collaborating on solving this issue.
Let me begin with a quick masterview overlooking the Cinemetrics database; what follows then is a summary account of samplings and calculations Brisson and I have performed on parts of it: sets of data that pertain to Griffith’s films. Segments of our work have been exposed to Cinemetrics users across four Labs and in Brisson’s Discussion topic “Side by Side” where you will also find comments and doubts posted by Barry Salt; other segments remained unpublished until now. Gunars Civjans will then open the floor for your questions and suggestions.
Imagine the Cinemetrics database as a small ocean of a million or so time-length data varying between close to 0 and close to 600 seconds. It is highly unlikely that one would be able to find some structure common to all: first, not all of these data belong to films (non-films like TV shows, football matches or presidential addresses are met with among the Cinemetrics submissions); secondly, not all of the length measurements are those of SLs (there are, among my own Chaplin submissions, films measured by the frequency of laughter in the viewing hall; some lengths are those of the screen presence of a star, a music tune, etc.); thirdly, even the film films measured by SLs are too diverse to expect all of them to display some sort of salient pattern. Too many different people pursuing different research agendas and interested in different areas and periods of filmmaking have been submitting data to Cinemetrics within the last 7 years. This is what an imaginary “database film” might look like if all its length data were averaged and projected onto a conditional “film length” axis:
Figure 1: Curve obtained by Keith Brisson in June 2011 using his “partitions method” (henceforth the “Brisson curve”) to process all the 7K+ submissions found at the Cinemetrics Db by that date
On the other hand, there are smaller groups found in the pool of Cinemetrics data within which SLs cohere. Evidently the basic autonomous group of this kind is this or that individual film; hence the architecture of the Cinemetrics database, whose front page looks like a list of records but is, in fact, a database of databases:
Figure 2: Opening page (detail) of the Cinemetrics database shown here sorted by title. The “Abraham Lincoln” record, for instance, specifies that the film was made by D.W. Griffith in the US in 1930, measured and data-submitted by Charles O’Brien in 2012, its ASL being 12.1 seconds, etc.
Indeed, if we click, say, on the “Abraham Lincoln” record, what we access is a smaller, well-structured and content-coherent database of shots:
Figure 3: Database of 449 SL data extracted from the set of shots (minus credits and the end title) all of which are part of the record ID’d “Abraham Lincoln.” Color-coded are subsets of SL data pertaining to 4 types of shots
In other words, while on the ground level the Cinemetrics database looks perfectly chaotic, the picture changes to a more orderly and meaningful one when it comes to the smaller databases that hide under the record for each film. Let us call the latter a “high-level” SL organization; the level Keith and I were curious about is a “middle level” of database coherence: can coherence be detected between, and not only within, a number of high-level databases, called “submissions” in the Cinemetrics vernacular?
Indeed, when we arrange Cinemetrics records as we have on Figure 2, nothing unifies them aside from the fact that their key metadata start with A. The question is what is going to happen if we sort them by metadata which are more relevant to editing, for instance, this way:
Figure 4: Opening page (detail) of the Cinemetrics database shown here sorted by the director ID: Griffith
There are grounds to believe some consistency can be found in Griffith’s editing choices across the years; the question is of what kind.
Figure 5: Cinemetrics graphs for D. W. Griffith’s A Romance of the Western Hills (1910) to the left and Abraham Lincoln (1930) to the right. The polynomial smoothing lines are set to Degree=10
Take a film made by Griffith in 1910 and put its dynamic profile back to back with Abraham Lincoln made 20 years later, as on Figure 5. Is there something in common between the two, or is it the polynomial smoothing method that tempts us into seeing a semblance between the two profiles?
Of course, this is a rhetorical question, and Figure 5 is a rhetorical comparison: too many things happened between 1910 and 1930 in the history of editing to reduce similarities or differences between the two graphs to Griffith’s individual handwriting; in addition, any theory based on two graphs is too data-thin to draw a conclusion. Ideally, all 535 films that Griffith (co)directed should be put back to back, or, for instance, a subset of those which Griffith made at the Biograph studio between 1908 and 1913. This way we could determine if there is a family resemblance between the films Griffith made at a certain period, or of a certain genre, or perhaps between all of them.
This is the middlelevel of the Cinemetrics Db coherence that I was talking about earlier on. But here, of course, an operational problem interferes: what do we do to compare data flows of 500, 50 or even 5 films? One solution is to use multiple plots superimposed onto the same grid, as Nick Redfern did in his study of Chaplin at Keystone, or Mike Baxter did for 24 action films in his “Lines, damned lines and statistics.” Is there also a way of summarising middlelevel data using one line instead of 24?
In the spring of 2011 Keith Brisson and I embarked on a series of experiments whose goal was to find out exactly this: can data pertaining to a number of films be averaged and visualised as if they were one? Griffith looked like a good specimen for this study. Three telescopically embedded samples were selected, each placed into a separate Cinemetrics lab: A) 132 Griffith films from any time (1908-1931) found in the Cinemetrics database, one submission per title; B) Griffith’s 61 Biograph movies (1908-13); C) 19 Biograph movies in which crosscutting is used. I will say more about crosscutting later in this note; one thing to mention now is that crosscutting sequences tend to start at a slower pace and gather speed towards the end.
Cinemetrics labs come equipped with a semblance of a blackboard, or star map, depending on which visual metaphor we choose to refer to the black-foil scatter plot below:
Figure 6: Scatter plot with gray dots standing for the entire set of Cinemetrics submissions, and yellow dots for all Griffith’s submissions on the Db. The X-axis represents the span of film history in years, the Y-axis ASLs in descending order: the slower, the lower. The dashed line (added later in Word) marks the end boundary of the Biograph period 1908-1913
Each yellow dot on Figure 6 represents an ASL figure for one of the 132 films by Griffith selected in this lab (if they fail to light up when you visit the lab online, change your browser to Mozilla Firefox). The cascade of sparks, which becomes scarcer as films become longer (hence longer to make), tells us how steeply Griffith’s cutting rates accelerated till the beginning of the 1920s, and then fell back in the early 1930s (this following the general trend of early talkies, to be sure). Some dots overlap, and some important submissions are still wanted (Orphans of the Storm, for instance), but in general, the Griffith constellation has a story to tell. There is a consistency in the fluctuation of the speeds with which Griffith cuts. To what extent this story is Griffith’s own and what it shares (or does not) with the universal drift is not what concerns us here.
What the above plot does not tell us, however, is if the way Griffith cut within the duration of a film remained consistent from film to film and from year to year. Cinemetrics graphs allow us to ask this question about each separate film; what Keith and I were after was a supergraph, the graph of graphs which would enable us to detect at which point in the run of the film Griffith’s editing becomes typically faster and typically slower.
The first problem we needed to solve was how to compare various films of unequal length: no film is likely to be of exactly the same length as another, and Griffith’s movies vary in length from the split-reel shorts he began with to three-hour-plus affairs like the 1916 Intolerance. The solution Keith offered was to split each film into a number of equal-length partitions.
Figure 7: Shots-to-partitions conversion of a hypothetical 2-minute movie; a demo slide Keith Brisson designed for a presentation he, Arno Bosse and I gave at the Digital Humanities conference at Stanford in 2011
In his topic posted on the Discussion Board here Keith Brisson explains his method in some detail. Let me give a general sense of how it works. On Figure 7 a 120-second-long dummy film consists of 8 shots of unequal lengths. The program divides it into 6 equal-length partitions. For each of these partitions we can count the number of shots that occurred in it; if only a fraction of a shot falls within the partition, we add that fraction. For each partition, dividing the partition's length by its shot count yields the ASL for that partition. We can thus compare movies by using the same N value for each film. For instance, if N=100, then the first partition of each represents the first 1/100th of each film, and we can calculate and compare the corresponding ASLs. These data can be averaged across films: if we choose N=100 and look at the 50th partition of each film, we can calculate the average shot length at the midpoint of each, and these averages can then be averaged again.
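Read this way, the method can be sketched as follows; variable names and edge-case handling are my reconstruction of the procedure described above, not Keith Brisson's actual code:

```python
def partition_asls(shot_lengths, n_partitions=100):
    """Split a film into N equal-length partitions and return the ASL
    of each: partition length divided by the (fractional) shot count.

    A shot spanning several partitions contributes to each of them the
    fraction of the shot that falls inside that partition.
    """
    total = sum(shot_lengths)
    part_len = total / n_partitions
    counts = [0.0] * n_partitions
    start = 0.0  # running start time of the current shot
    for sl in shot_lengths:
        end = start + sl
        first = int(start // part_len)
        last = min(int(end // part_len), n_partitions - 1)
        for p in range(first, last + 1):
            p_start, p_end = p * part_len, (p + 1) * part_len
            overlap = min(end, p_end) - max(start, p_start)
            counts[p] += overlap / sl  # fraction of this shot in partition p
        start = end
    return [part_len / c for c in counts]

def supergraph(films, n_partitions=100):
    """Average per-partition ASLs across films: the 'graph of graphs'."""
    per_film = [partition_asls(f, n_partitions) for f in films]
    return [sum(col) / len(col) for col in zip(*per_film)]
```

Because every film is mapped onto the same N partitions, films of any length can be averaged position by position, which is what produces the Brisson curves below.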
Doing this for each of the Griffith titles available on Cinemetrics, we end up with a supergraph: a curve representing the ASL at each point in the run of the "average film," as in this supergraph “All Griffith:”
Figure 8: The Brisson curve obtained for all 132 Cinemetrics-hosted films by Griffith listed in this Lab. Arrows added.
The curve fits, roughly, within the space between c. 3.2 seconds, where the all-Griffith average film starts, and its slowest point (c. 6.7 seconds), reached after 10% of its duration; it starts regaining tempo sometime in the area of 30%, starts losing speed again after 60%, and ends pretty much in the middle, slightly above 5 seconds.
Figure 9: The Brisson curve obtained for 61 Biograph films by Griffith listed in this Lab. Arrows added.
The Biograph supergraph on Figure 9 displays a similar profile but with sharper features. It starts faster (ASL just above 2 seconds) than the all-Griffith graph on Figure 8, becomes slower than the latter at around 8% of its run (ASL just below 7”), and gains speed again (ASL between 3” and 4”) somewhere between 60 and 70% into the film.
Both of Griffith’s curves are more informative than the all-submission curve shown on Figure 1, in which all that noise fits into the narrow slot between 3.4 and 4.3 seconds. Consider the growth and decrease in Griffith’s cutting speed which takes place after the “slow speed” point. This pattern conforms to what we know about Griffith’s habit of cutting faster as the story tension grows. What comes more as a surprise to those familiar with Griffith’s films is a strange steep beak which shows on both of the Griffith graphs, a longer one on the Biograph graph and a shorter one on the all-Griffith graph.
Two possible explanations have been offered to account for the beak effect, both by Barry Salt here. On the one hand the fast start could have been caused by a calculation error, a statistical artifact of a kind Salt found in a study by James Cutting and collaborators, see more here. On the other, Salt suggests, “the dip might be a real characteristic of the structure of the films themselves. This is interesting.”
If the latter assumption is correct, the fast-start effect diagnosed by the Brisson curve may have something to do with the instant openings characteristic of one-reel narratives, with their 15-minute limit of screen time imposed on any story, be it a day-long or a lifelong one. Screenwriting manuals of 1913 (the last year of the reign of one-reelers in the American film industry) would typically issue this warning: “A common mistake among amateur photoplaywrights is to take too long in ‘getting down to business’ – far too much time is wasted on preliminaries. … No matter what kind of story you are writing, go straight to the point from the opening – make the wheels of the plot actually commence to revolve in the first scene – plunge into your action, don’t wade timidly in inch by inch” (J. Berg Esenwein, Arthur Leeds, Writing the Photoplay (Springfield, Mass.: The Home Correspondence School, 1913), pp. 115-6; italics in the original).
Griffith’s favorite method of plunging into action was to insert a brief expository title before the first shot. Suppose the story starts with a young girl (or two) losing a parent. In a 150-minute film called Orphans of the Storm (1921) Griffith afforded a prologue showing a mother murdered, a baby put at the church’s door, being adopted, growing into a beautiful young girl – and only then does the complication phase set in. Not so in the teeth of the austere footage policy enforced at Biograph ten years earlier in Griffith’s career. All Griffith could afford in the way of a prologue in those years was a brief obituary-style title like the ones that open these two films from 1910 and 1912:
Figure 10: Expository intertitles that open Griffith’s As It is in Life (1910) to the left and An Unseen Enemy (1912) to the right
The timing rule for intertitles being one foot per word (which equals one second at 16 fps), the intertitle on the left would last for 3 seconds, and the one to the right, say, 8. If we compare the three-second opening of As It Is in Life: (7) ASL 16.5 (whose intertitles-only ASL = 4.8”) we will think this indeed may be something that caused the fast-start beak on the Brisson curve; on the other hand, if we compare the 8 seconds taken by the more tearful expository title shown on the right of Figure 10 with the average data for Unseen Enemy, An: (7) ASL 7.1 (intertitles-only ASL = 6.3”), the result will point in the opposite direction.
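The conversion behind these figures is simple enough to state as code; the 16 frames-per-foot value is the standard for 35mm film, and the function name is mine:

```python
FRAMES_PER_FOOT_35MM = 16  # a foot of 35mm film holds 16 frames

def intertitle_seconds(word_count, fps=16):
    """The 'one foot per word' timing rule: each word gets one foot of
    film, i.e. 16 frames, which at 16 fps lasts exactly one second."""
    return word_count * FRAMES_PER_FOOT_35MM / fps
```

Hence a 3-word title runs 3 seconds at 16 fps, but only 2 seconds when the same footage is projected at 24 fps.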
The next step Keith and I took was what Barry Salt has termed “experimental film history” in the final paragraph of his recent essay. Cinemetrics measurements done in the advanced mode allow sorting out data by shot categories; what happens if we take a set of advanced-mode-measured Biographs by Griffith, and partition-process them first with, and then without, intertitles taken into account? Will the beak change or remain the same?
Figure 11: Two Brisson curves obtained for a sample of 61 Biograph films by Griffith listed in Notes to this lab. Explanation in the text. Arrows added.
These two curves were obtained from the same set of 61 Biograph films partition-processed with (blue) and without (red) intertitle data. Most of the time the blue curve unfolds in the faster-cut area of the graph, which means (to no surprise) that on average titles were shorter than live-action shots. And the long beak became shorter by some 4.7 seconds.
Another subset of Biographs we used in order to test the partitions method were Griffith’s films in which crosscutting is used. Griffith’s main area of experimentation throughout his Biograph years, crosscutting is an ab(c)ab(c)… technique of shot sequencing in which a, b, c, etc. are mutually related series of actions taking place each in a different location. While crosscutting can be (and was) used in any genre, its native and privileged genre is the suspense melodrama, in which danger, aggression, and the subsequent rescue operation are sequenced in the above order. When it comes to the rescue stage of the plot, the frequency of cuts shifting between spaces grows. Would the partitions curve give us a sense of the early evolution of Griffith’s crosscutting?
Before I report our results let me briefly dwell on some graphic ideas Keith and I have been toying with as we thought of best ways of visualizing crosscutting. Two alternative methods are habitually used to represent shot lengths in a graph.
Figure 12: Cinemetrics graph for the “takeoff” sequence from Vsevolod Pudovkin’s Pobeda [Victory; American release title: Mother and Sons], 1938. Click to access: Pobeda (start sequence): (7) ASL 2. Click to see the whole-movie graph: Pobeda: (7) ASL 5.8
Method one is to use a bar chart, as we do in Cinemetrics: brighter bars hanging upside down from the X-axis stand for shot lengths. The longer the shot, the longer the bar; the width of the bars remains the same within the space of one graph.
Figure 12 stands for 25 shots from the film’s second reel: the bravura takeoff of a plane. It so happens that a detailed cutting chart for this sequence survives; designed by Pudovkin in 1938, the chart served him as a visual aid to coordinate shot lengths with the lengths of the four elements that constitute the soundtrack.
Figure 13: Pudovkin’s preparatory graph (fragment) for the editing of the “takeoff” sequence from Vsevolod Pudovkin’s Pobeda, 1938. Click to see a fuller version of the chart pasted in a comment box under: Pobeda: (7) ASL 5.8. The chart was reproduced in Lev Kuleshov’s manual Osnovy kinorezhissury [Fundamentals of film directing], Moscow: VGIK, 1941. Arrows added in Word
Distinct from the Cinemetrics graph on Figure 12, here the X-axis, not the Y-axis, serves to visually render the length of each shot. In Pudovkin’s chart, each shot is a little column with a shot number inscribed in its capital and shot length in seconds in its base. All columns are of the same height, but the width of each depends on the figure in the base.
Clearly, each of the two systems has its advantages and drawbacks. A trendline, for instance, which helps to smooth data in Cinemetrics, would not work in a Pudovkin-type graph; on the other hand, a trendline was of no use for Pudovkin, who used the chart in 1938 as a blueprint for, not as a fingerprint of, editing. What his system was geared for was to coordinate the optical track with the sound track.
There is no reason why the two systems cannot be integrated into one, Keith and I reasoned; in our experimenting with Griffith’s crosscutting at Biograph we did exactly this. In the series of diagrams that follow, the length of a shot in a film is indicated both by its height along the Y-axis and its width along the X-axis. The plan was to use what might be called biaxial plotting to scrutinize a small sample from 1908-1909, Griffith’s earliest years of crosscutting. We took 4 films, plotted them separately, and then used the Brisson-curve process on all four.
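The geometry of such a biaxial plot is easy to generate: lay the bars end to end along the running time of the film, with each bar's width and height both equal to the shot's length. This sketch is my own reconstruction of the idea, not the code Keith and I actually used; the triples it returns could be fed to, for example, matplotlib's bar(x, height, width=width, align='edge'):

```python
def biaxial_rectangles(shot_lengths):
    """Return (x_left, width, height) triples for a biaxial SL plot.

    Each shot is a rectangle whose width AND height equal its length,
    placed at the shot's start time along the X-axis, so that long
    takes appear as plateaus and bursts of fast cutting as ravines.
    """
    rects = []
    x = 0.0
    for sl in shot_lengths:
        rects.append((x, sl, sl))
        x += sl
    return rects
```

Fed the shot lengths of The Prussian Spy, this would reproduce the plateau-ravine-plateau formation described under Figure 14.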
Figure 14: Biaxial SL plot for Griffith’s 1909 The Prussian Spy: (6) ASL 26; the year 1908 above the graph is given in error
The figure above plots the editing of one of Griffith’s first attempts at crosscutting, entitled The Prussian Spy (unjustly misspelt as “Russian spy” in many American filmographies). The plateau which ends with a precipice soon after the middle of the film is the first shot, 165.5 seconds long; a rocky ravine in the middle is formed of 9 shots, all under 16 seconds; a lower, shorter plateau (39.6″) closes the film.
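The biaxial construction is easy to state computationally: each shot occupies a stretch of the X-axis equal to its length (its place in the running time) and rises to a Y-height equal to that same length. A minimal sketch follows; the shot list is invented, loosely echoing the plateau–ravine–plateau shape just described, not transcribed from the film.

```python
def biaxial_blocks(shot_lengths):
    """Return one (x_start, x_end, height) block per shot.

    Each shot spans a stretch of the X-axis equal to its length
    (cumulative running time) and rises to a Y-height equal to
    that same length, as in the biaxial plots above.
    """
    blocks, t = [], 0.0
    for sl in shot_lengths:
        blocks.append((t, t + sl, sl))
        t += sl
    return blocks

# Toy profile: long opening plateau, ravine of short shots, closing plateau.
toy = [165.5, 10.0, 8.0, 12.0, 6.0, 39.6]
for x0, x1, h in biaxial_blocks(toy):
    print(f"{x0:6.1f}-{x1:6.1f}s  height {h}")
```

Feeding these blocks to any plotting library as filled rectangles reproduces the “rockscape” look of Figures 14–17.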
Figure 14 tells the historian of film a lot, for its uneven formation is caused by the clash between two filmmaking systems, verily geological in age and scale. The more archaic (yet by no means “retarded”) system privileged single-space action articulated by entrances and exits; the other, on which Griffith increasingly relied through his Biograph years, preferred to spread action across shots; crosscutting, of course, is a particular case of the latter. In The Prussian Spy Griffith made use of both; how the choice of either affects shot lengths is evident from the plot above.
Tom Gunning (who prefers the term “parallel editing” to “crosscutting”) gives a helpful analytical description of the film’s action:
The Prussian Spy … is a good example of this uneven development in articulation. The first shot of the film lasts an agonizing ninety-nine feet (16 mm), more than half the length of the entire film. An enormous amount of information is contained in this shot, with characters entering, exiting and re-entering. A spy (Owen Moore) is concealed in a closet by his lover (Marion Leonard); the concealment is discovered by a French officer (Harry Solter). To torment the woman and destroy his enemy, the officer tacks a target onto the closet door, claiming he must practice his aim with his pistol. All of this takes place in a single shot. … However, as soon as a parallel editing schema can be introduced, the film alters radically. The second half of the film fragments into ten shots. The woman has sent her maid (Florence Lawrence) to open a trap door above the closet and help the spy escape. The sequence alternates dramatically between the trap door and the parlor containing the closet. Griffith repeatedly interrupts action with a cut on gesture (the French officer aiming his pistol at the targeted door), switching to the progress of the maid as she pries the trap door open and attempts to get the spy out of danger (GUNNING 1991, p. 198).
The only detail one can add to this description, looking at Figure 14, is the pause on the last shot; indeed, a few extra seconds given to relaxation (arrest; embrace or kiss; comic relief) were seen as the right note on which to end the film; in the infrequent cases when Griffith’s victim dies, extra seconds are reserved for us to grieve:
Figure 15: Biaxial SL plot for Griffith’s 1908 Behind the Scenes: (2) ASL 23. Recolored in Word; see the text for the legend
This story is a mother-dancing-daughter-dying melodrama (click here for a detailed synopsis). The crosscutting is between home and theater. The red-colored areas of the plotline in Figure 15 mark the shots set in the theater, to which the mother is summoned ostensibly to jump in for a no-show dancer; the uncolored areas stand for the home with the sick daughter. The two high cliffs that flank the crosscutting sequence in the middle are shots set at home: the length of the first is explained by the amount of action needed to set the scene (enter a messenger; exit with the mother); the length of the final one by dramatic needs (the mother learns the tragic news and collapses).
Figure 16: Biaxial SL plot for Griffith’s 1909 Drive for a Life, The: (7) ASL 15.4
The Drive for a Life (Figure 16) and At the Altar (Figure 17) are both stories of averted bridal revenge. Distinct from Behind the Scenes or The Prussian Spy, it takes more than one shot for the former mistress or the rejected suitor to set up the trap before the films precipitate into crosscutting. Hence a less regular rockscape above and below:
Figure 17: Biaxial SL plot for Griffith’s 1909 At the Altar: (7) ASL 22.5
What we did next was to process the biaxial graphs (Figures 14–17) to obtain a biaxial Brisson curve for all four:
Figure 18: Biaxial Brisson curve obtained for four of Griffith’s crosscutting films of 1908–9
The main information we gain from the aggregate plot in Figure 18 is that crosscutting in early Biograph one-reelers has a tendency to accelerate after the halfway point, reach its peak about 10% from the end, and slow down slightly at the end of the tail. But there is also some information that we lose. If we compare the plotline in Figure 18 with any of the individual biaxial plots (Figures 14–17), what we will find lacking is a sense of rhythm. Take a look at Figures 15 and 16, for instance: the longer rocks look so well spaced by series of shorter ones that one is tempted to see behind the graphs the hand of a skillful landscaper. Clearly, this is not the case in Figure 18: apparently, four times rhythm equals ruin.
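The Brisson-curve procedure itself is not spelled out in this excerpt, so the sketch below assumes one plausible reading of what aggregation across films of different lengths involves: express each film’s running time as a fraction of its total, sample every film’s shot length at the same grid of fractions, and average across films. The shot lists are invented stand-ins, not data from the four Biograph films.

```python
def sl_at_fraction(shot_lengths, frac):
    """Shot length in force at a given fraction of total running time."""
    total = sum(shot_lengths)
    target, t = frac * total, 0.0
    for sl in shot_lengths:
        t += sl
        if target <= t:
            return sl
    return shot_lengths[-1]

def aggregate_curve(films, points=10):
    """Average shot length across films at each running-time fraction."""
    fracs = [i / (points - 1) for i in range(points)]
    return [
        sum(sl_at_fraction(f, fr) for f in films) / len(films)
        for fr in fracs
    ]

# Invented shot lists standing in for the four one-reelers.
films = [[30, 10, 5, 5, 20], [25, 8, 6, 4, 30],
         [40, 12, 6, 8, 18], [20, 9, 7, 5, 25]]
print(aggregate_curve(films, points=5))
```

Averaging on a common fractional grid is exactly the step that washes out each film’s individual rhythm, which is the loss the paragraph above describes.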
What the above account describes are attempts to use a single trendline to summarize the dynamic curves of multiple films. Attempts like this raise a number of questions. One is to what extent comparisons between films of different lengths can be seen as a legitimate procedure. Longer films typically contain more shots; shorter ones, fewer. It seems intuitively reasonable to compare four one-reelers from two consecutive years, 1908 and 1909, or, say, two circa three-hour-long Griffiths from 1915 and 1916; is it as kosher to try to summarize the data dynamics of Griffith films as distant in time and different in length as the 10-minute 1909 Lonely Villa, The: (7) (ASL 12), consisting of 54 shots, and the 144-minute 1920 Way Down East: (7) (ASL 4.9), consisting of 1767 shots? Even if their sparkline graphs look as similar as above?
Another question might be about similarity/dissimilarity assessment. Film scholars and specialists in visualization like Lev Manovich (see his recent experiment on visualizing Vertov’s films) are used to basing their judgment on visual shapes, be these of moving figures seen on the screen or of more abstract film-related statistical graphs. Are there ways of calculating a “coefficient” of family resemblance between biaxial plots, for instance, those in Figures 16 and 17?
As earlier, these questions are shots in the dark, meant mainly to steer this conversation, not to solicit direct answers or recommendations.
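That said, one crude candidate for such a resemblance coefficient can at least be sketched: resample both films’ shot-length curves onto a common grid of running-time fractions and take their Pearson correlation. Everything below (the grid size, the shot lists) is invented for illustration; more robust choices (rank correlation, distance measures) would deserve the statisticians’ comment.

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def resample(shot_lengths, points):
    """Shot length in force at evenly spaced fractions of running time."""
    total = sum(shot_lengths)
    out = []
    for i in range(points):
        target, t = (i / (points - 1)) * total, 0.0
        for sl in shot_lengths:
            t += sl
            if target <= t:
                out.append(sl)
                break
    return out

# Two invented shot lists with broadly similar shapes.
a = [60, 10, 8, 12, 40]
b = [50, 12, 9, 10, 35]
print(round(correlation(resample(a, 20), resample(b, 20)), 3))
```

A coefficient near 1 would say the two rockscapes rise and fall together; near 0, that their shapes are unrelated.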
The biaxial graph adds nothing to the basic data, which is the succession of shot lengths. Hence it makes possible no special new way of deriving measures or higher-order generalisations from the data that was not already available. The biaxial graph has no advantage over the existing ways of representing the data, which are the Cinemetrics graph and the timeline-type graph.
Actually, the loess smoothing graph on page 10 suggests a three-act structure more than a four-act structure. There are in fact only two internal maxima down the length of the graph, at around partitions 38 and 75, rather than the three maxima needed to indicate a four-act structure. You can't pick and choose which turning points indicate the locations of the acts. A four-act structure is also ruled out if one postulates that the act divisions are indicated by the minima.
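The turning-point argument can be made mechanical: count the strict local maxima of the smoothed curve, excluding the endpoints. A minimal sketch follows; the loess fit itself is not reproduced here, so the input is an already-smoothed toy series invented to echo the two-peak shape described above.

```python
def internal_maxima(smoothed):
    """Indices of strict local maxima, endpoints excluded.

    On a smoothed cutting-rate curve, each internal maximum is one
    candidate act climax; k internal maxima suggest k + 1 acts.
    """
    return [
        i for i in range(1, len(smoothed) - 1)
        if smoothed[i - 1] < smoothed[i] > smoothed[i + 1]
    ]

# Toy curve with two internal peaks, echoing the two maxima
# (around partitions 38 and 75) read off the loess graph.
curve = [1, 3, 5, 4, 2, 3, 6, 4, 2]
peaks = internal_maxima(curve)
print(peaks, "->", len(peaks), "internal maxima")
```

Two internal maxima would thus point to a three-act reading, exactly as argued; a real application would also need a rule for ignoring maxima too small to count as climaxes.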
This comment is to Part 1 "On pace and pulse" of the above "Notes".
Some of the literature I’ve seen attempts to measure ‘visual activity’ within a shot, or what might be called ‘auditory activity’, and combines this with SL data. Adams et al. (various papers) combine such data into what they call a ‘tempo’ function and use it, among other things, to try to identify act boundaries in three-act films. The idea, as I understand it, is that boundaries are assumed to be associated with dramatic climaxes, and the tempo function provides additional information beyond mere changes in cutting rate. Can this be considered an attempt to quantify ‘dramatic tension’, conceived differently from the way you’ve treated it?
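To make the idea concrete, here is a schematic stand-in for such a tempo function, not a reproduction of Adams et al.’s actual formula: both shot length and a per-shot motion measure are standardised, shot length enters negated (fast cutting raises tempo), and an arbitrary weight `alpha` mixes the two. The per-shot data below are invented.

```python
def tempo(shot_lengths, motion, alpha=0.5):
    """Schematic tempo: shorter shots and more motion -> higher tempo.

    Both components are z-scored; shot length is negated so that fast
    cutting raises tempo. `alpha` is an arbitrary mixing weight, not
    the weighting used in the published tempo function.
    """
    def zscores(xs):
        n = len(xs)
        m = sum(xs) / n
        sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5 or 1.0
        return [(x - m) / sd for x in xs]

    z_len, z_mot = zscores(shot_lengths), zscores(motion)
    return [alpha * (-zl) + (1 - alpha) * zm
            for zl, zm in zip(z_len, z_mot)]

# Invented per-shot data: lengths in seconds, motion on an arbitrary scale.
lengths = [20, 15, 6, 4, 3, 8, 25]
motion = [0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.1]
for i, t in enumerate(tempo(lengths, motion)):
    print(f"shot {i + 1}: tempo {t:+.2f}")
```

Peaks of a smoothed version of this series are then the candidates for act boundaries, on the assumption the comment describes.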
Author: Gunars Civjans Date: 20120719
Every tab on this page has its own comments section at the bottom of the page.
Post comments relevant to the article above.
Author: Rosie Date: 20140721
Hello! I came across your page as I recently started delving into film & stats myself, having written a blog post on the stats I gathered while shot-logging a no-budget feature earlier this year. I see you are more interested in the shot details themselves, but I was wondering if any of you have delved a little deeper into the factors that affect shot length, take number, etc.? I plan to do a bit more on this myself but would love your thoughts!
http://fightingbadgers.wordpress.com/2014/07/14/thescienceofshotlogging/