Which of the following examples best illustrates the law of diminishing returns?

One of the most important advances in the recent literature has been the introduction of various kinds of heterogeneity and nonlinearity in the effect of aid. The relevant variation could occur across countries, across time, and across different categories of aid. One reason that heterogeneity receives so much attention is that it often has direct and concrete implications for the policies of donors. There is a great deal at stake here, which makes some of the current treatments of heterogeneity look disconcertingly simplistic.

One idea is that aid involves diminishing returns: as aid to a given country increases, it may become harder to use it effectively. To capture this possibility, growth is often modeled as a quadratic function of aid relative to GDP. To the extent that this effect is present, estimating a simpler linear relationship could be misleading about the benefits of aid. Yet we should be careful before placing too much weight on this empirical finding. An estimated quadratic will often be sensitive to outlying observations, and even where the quadratic term is statistically robust, diminishing returns are not the only possible explanation. The overinterpretation of results becomes especially dangerous when the turning point of the estimated quadratic function is used to calculate a limit for aid intensity, beyond which further aid is said to become ineffective. Notwithstanding many other problems, such exercises rarely calculate a confidence interval for the turning point. Since the turning point is typically based on a ratio of parameter estimates, it will be hard to identify precisely in the available data.
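To make the turning-point caveat concrete, here is a minimal sketch in Python using simulated data (the sample size, coefficients, and noise level are invented for illustration and are not estimates from the aid literature). It fits a quadratic of growth in aid/GDP, computes the implied turning point as a ratio of coefficient estimates, and attaches a delta-method confidence interval, which will typically be wide.

```python
# Illustrative only: simulated data and made-up coefficients, not estimates from the aid literature.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                   # a cross-section of 60 hypothetical countries
aid = rng.uniform(0, 20, n)              # aid/GDP in percent
growth = 1.0 + 0.4 * aid - 0.02 * aid**2 + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([aid, aid**2]))
fit = sm.OLS(growth, X).fit()
b1, b2 = fit.params[1], fit.params[2]

# Turning point of the fitted quadratic: the aid/GDP ratio at which the
# estimated marginal effect of aid falls to zero.
turning_point = -b1 / (2 * b2)

# Delta-method standard error: the turning point is a ratio of estimates,
# so its sampling variance depends on the full covariance of (b1, b2).
grad = np.array([-1 / (2 * b2), b1 / (2 * b2**2)])
cov = fit.cov_params()[1:, 1:]
se = np.sqrt(grad @ cov @ grad)

print(f"turning point: {turning_point:.2f}% of GDP "
      f"(95% CI roughly {turning_point - 1.96 * se:.2f} to {turning_point + 1.96 * se:.2f})")
```

In data of this kind the interval often spans much of the observed range of aid intensity, which is precisely why reading the turning point as a hard ceiling for aid is risky.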

One widely discussed claim, initially associated with Burnside and Dollar (2000), is that aid is most effective in certain environments. The economics of this claim, and its implications, are covered in more detail in the main text. The statistical evidence is not overwhelming. The best-known findings are based on pooled cross-section time-series models, as in Burnside and Dollar (2000) and Collier and Dollar (2001, 2002). Sometimes the aid term is insignificant, but the effect of an interaction term between aid and policy can be estimated more precisely. These findings do not appear especially robust, however. Dollar and Levin (2006) provide references to many studies that question the Burnside and Dollar analysis. The cross-section study by Kourtellos, Tan, and Zhang (2007) finds considerable instability in regression tree models, which suggests that threshold effects and interactions—whether based on good policy or other variables—are sensitive to modeling assumptions.

Much of the literature has focused on heterogeneity in terms of recipient characteristics. But, as in the discussion of the IV approaches used in the literature, the marginal effects of aid could also vary with the nature, intentions, and policies of the donor. This could give rise to heterogeneity both over countries and over time, as Bobba and Powell (2007), Headey (2005), and Minoiu and Reddy (2007) have recently argued. For example, with the end of the Cold War, and the weakening of strategic motivations for aid, the marginal effect of aid in the 1990s might be stronger than in previous decades. Across countries, perhaps bilateral aid from “enlightened” donors has a larger marginal effect than aid given for strategic reasons; for similar reasons, aid from multilateral donors may have greater benefits than bilateral aid. But these effects may be too subtle to be readily discernible in the cross-country data. When bilateral aid from relatively small donors (in absolute terms) is found to be a significant growth determinant, this raises more questions than answers. In particular, it seems likely to point not toward the importance of enlightened aid, but toward heterogeneity in aid assignment rules. Donors appear to differ substantially in the extent to which their aid responds to particular recipient characteristics, as Headey (2005) notes. In that case, in growth regressions, disaggregation of aid by donor may end up capturing the effects of nonrandom assignment rather than the intended causal effect. Even panel data studies are not immune to this critique, given that time-varying confounders could influence both aid allocation and growth.

Another approach is to disaggregate aid receipts into different categories, perhaps with different effects on growth. If aid is only partially fungible, then the effect of aid will vary with its type, and Bhagwati (1972) and Papanek (1972) both warned that aggregating different types of aid can be seriously misleading. Clemens et al. (2004) have developed an operational approach, which classifies aid flows into categories expected to have a fast-acting impact on growth and others whose effects may be longer term or neutral. Disaggregation is justified if different types of aid work over different horizons, or vary in their effects on productivity. This makes obvious sense from an economic point of view, and helps to provide a more complete picture of dynamic responses. The main drawback is the limitations of the underlying data needed for the classification. Headey (2005) recommends a simpler approach, which is to subtract humanitarian assistance from recorded ODA flows. The disaggregation of aid flows is likely to be an important direction for future work, across the full range of cross-country research on aid effectiveness.

URL: //www.sciencedirect.com/science/article/pii/B9780444529442000057

Theoretical underpinnings

Constantinos Ikonomou, in Funding the Greek Crisis, 2018

2.1.5.3 Implications on growth: Is there a substitution relation between technology and integration?

A firm’s production exhibits diminishing returns: as production factors rise, the rate of output growth falls beyond some point. This is illustrated by the flattening slope of the production curves in Fig. 2.4, which distinguishes a case where a firm manages to benefit from integration (Qintegr) from one where it does not (Qnon-integr). The same increase in labor, from L0 to L1, yields a larger increase in output under integration: a firm can produce and sell more in an enlarged, integrated common market. Without integration benefits (for example, once the domestic market reaches saturation), production can increase only through successful technological investments. Developing a new product, however, carries risks and is less safe than selling existing products in new markets. Thus, integration may provisionally substitute for technological advancement (so long as output keeps rising), even for firms that would otherwise require technological inventions to cope with large, saturated domestic markets.

Figure 2.4. Production curves for an integrated and a non-integrated economy.
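A stylized numerical reading of Fig. 2.4 follows; the square-root production curve and the scale factors are assumptions chosen only to illustrate the argument, not values taken from the chapter.

```python
# A stylized numerical reading of Fig. 2.4 (the functional form and numbers
# are assumptions for illustration, not taken from the chapter).
import numpy as np

def output(labor, scale):
    """Concave production curve: output rises with labor but at a falling rate."""
    return scale * np.sqrt(labor)

L0, L1 = 100, 144
for name, scale in [("non-integrated", 1.0), ("integrated", 1.3)]:
    gain = output(L1, scale) - output(L0, scale)
    marginal_at_L1 = scale / (2 * np.sqrt(L1))   # dQ/dL, falling in L: diminishing returns
    print(f"{name:15s} output gain from L0 to L1: {gain:5.2f}, "
          f"marginal product at L1: {marginal_at_L1:.3f}")
```

The same move from L0 to L1 yields a larger output gain on the integrated curve, even though the marginal product of labor is falling in both cases.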

The interrelation between capital and labor is illustrated in Fig. 2.5. Due to the integration process, the curve describing substitution between the production factors capital (K) and labor (L), characterized by the marginal rate of technical substitution, may expand (shift rightward), since more capital and more labor may be employed. Again, in a hypothetically closed economy, such a result is produced only by technical advancement.

Figure 2.5. Marginal rate of technical substitution for an integrated and a non-integrated economy.

URL: //www.sciencedirect.com/science/article/pii/B9780128145661000021

Operationally Relevant Preprocessing

Colleen McCue, in Data Mining and Predictive Analysis (Second Edition), 2015

6.2.1 Time

There are many ways to begin exploring data for recoding; however, there are some standard techniques that can be used to start the process. Preliminary steps include parsing the data by various temporal measures (e.g., time of day, date, day of week, month, season). For ease of use, time of day can be divided into time blocks or shifts. This is particularly useful when considering deployment issues, as it is much easier to staff from midnight to 0800 h than from 0258 h ± 53 min. Four- to eight-hour time blocks work well for deployment analysis. Personnel generally do not work 4-h shifts, but using a 4-h level of analysis does afford some flexibility, as 8- or 12-h shifts can be overlapped to provide additional coverage for time periods associated with greater anticipated workload. Time blocks shorter than 4 h become cumbersome, as the number of time blocks within the day increases and it is unlikely that very brief time blocks will have any value from a deployment standpoint.
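As a minimal sketch of this kind of temporal recoding, assuming a pandas workflow (the column names, example timestamps, and block boundaries are placeholders):

```python
# A minimal recoding sketch with pandas; the column name, example timestamps,
# and 4-hour block boundaries are assumptions chosen to match the discussion above.
import pandas as pd

incidents = pd.DataFrame({
    "occurred_at": pd.to_datetime([
        "2015-03-06 02:58", "2015-03-06 17:40", "2015-03-07 23:15",
    ])
})

# Time of day binned into 4-hour blocks that can later be rolled up into
# overlapping 8- or 12-hour shifts.
blocks = [0, 4, 8, 12, 16, 20, 24]
labels = ["0000-0400", "0400-0800", "0800-1200", "1200-1600", "1600-2000", "2000-2400"]
incidents["time_block"] = pd.cut(incidents["occurred_at"].dt.hour,
                                 bins=blocks, right=False, labels=labels)

# Other standard temporal recodes: day of week, month, and a crude season index.
incidents["day_of_week"] = incidents["occurred_at"].dt.day_name()
incidents["month"] = incidents["occurred_at"].dt.month
incidents["season"] = incidents["occurred_at"].dt.month % 12 // 3 + 1  # 1=winter ... 4=autumn

print(incidents)
```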

One exception to this is periods of time that are associated with specific incidents or anticipated events. For example, juvenile delinquency may spike for the relatively short period between school dismissal and when parents return home from work. In this case, establishing a time-limited deployment strategy around this relatively short yet high-risk time period makes sense. Similarly, as outlined in the example in Chapter 5, it is not unusual to observe transient increases in aggravated assaults associated with the closing of bars and nightclubs, followed by an increase in armed robberies. In this situation, these relatively transient spikes in crime can be related to the movement of a common victim population – bar patrons. Aggravated assaults, brawls, and tussling frequently are associated with the generalized mayhem of bar closings in areas densely populated with nightclubs. As these individuals make their way back to their vehicles, they make good targets for street robberies given the low lighting associated with the time of night and the increased likelihood that the victims’ judgment has been impaired from a night of drinking. Therefore, an effective response to these related patterns could include relatively brief, targeted deployment to the specific areas in question. The nightclub area would be addressed first, with an established police presence in anticipation of bar closings and the associated crowd control issues. These same resources could then be flexed to parking lots, side streets, and other areas associated with street robberies of these same patrons. This type of fluid deployment strategy can serve as a functional force multiplier because two seemingly different crime patterns are linked and addressed with the same resources. Because a common victim population is identified, the same personnel could be used to address two, relatively brief, time-limited challenges that appear to be very different at first glance (aggravated assaults and street robberies). Strategies like these require significant domain expertise and an excellent working relationship with operational personnel to validate the interpretation of the results and associated approach. The use of creative analytical strategies and fluid deployment can optimize public safety and security resource allocation.

Time blocks longer than 8 h often yield diminishing returns, as important fluctuations in activity are smoothed away by the larger amount of data aggregated, an effect that can be thought of as regression toward the mean. Similarly, it does not make much sense to establish time blocks that do not match the existing or desired shift-change times, even if they are the appropriate length. For example, in the development of a strategy to reduce random gunfire on New Year’s Eve, we found that the majority of the random gunfire occurred during a 4-h period that spanned from 10:00 P.M. on New Year’s Eve to 2:00 A.M. on New Year’s Day. While it might be attractive from a cost standpoint to craft a 4-h initiative to address this issue, it is not good personnel management to ask the staff assigned to the initiative to come in and work for only 4 h. In that situation, it made sense to expand the time block somewhat to make the assignment more attractive to the folks working that night. If there is no pressing need to change existing times, it works best if the data are analyzed and the models constructed to reflect existing shift-change times.

This scheduling issue can be managed at several points along the analytical process. During data entry and recoding, the analyst should consider what particular time blocks would make sense from a scheduling standpoint. Does the department use 8-h shifts, 12-h shifts, or is there some opportunity for overlap during particularly busy periods throughout the day? The answer to this question will dictate to a certain degree what level of data aggregation most closely reflects staffing preferences, and will therefore be the easiest to interpret and use. This is not to suggest that everything should remain the same because “that is the way it always has been done”; that type of thinking can really squander resources, particularly personnel resources. Rather, working within or relatively close to realistic, real-world parameters significantly increases the value of a model and the likelihood that it will be used.

Recoding specific dates into days of the week is a relatively standard practice. Time of day, however, can still matter in the analysis of daily trends. For example, it was puzzling to discover a large number of street robberies on Sundays until the specific times were examined. This analysis almost always revealed that the robberies occurred during the early morning hours and actually reflected a continuation of activity from Saturday night. Seasonal variations also can be important, particularly if they are related to the migratory patterns of victim populations (e.g., tourists). Other temporal recoding to consider may include school hours and holidays, as the unsupervised time between school dismissal and when parents return home from work can be associated with considerable mischief. Curfew violations and truancy also are associated with unique time periods that might have value if recoded appropriately.

Recoding should be considered an iterative process. As patterns and trends are revealed, different approaches to recoding or emphasis will emerge. For example, it is not unusual to find increases in criminal activity associated with payday. Therefore, recoding the data to reflect paydays could add value to modeling efforts and related operations. Other events including concerts and sporting events also may be related to public safety challenges or issues. Revealing those relationships and creating derived variables that document these events could result in the creation of more accurate or reliable models that will support better operational decisions. The ultimate question to be answered, however, will determine what time parameters are most appropriate.
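A short sketch of such derived variables follows, again assuming pandas; the payday rule, after-school window, and event dates are placeholders to be replaced with local knowledge.

```python
# Sketch of derived event variables; the payday rule, school hours, event dates,
# and column names are placeholders, not values from the text.
import pandas as pd

incidents = pd.DataFrame({
    "occurred_at": pd.to_datetime(["2015-03-13 15:30", "2015-03-15 21:00", "2015-07-01 14:00"])
})
ts = incidents["occurred_at"]

# Assume a twice-monthly payday (1st and 15th) and flag a two-day window around it.
incidents["near_payday"] = ts.dt.day.isin([1, 2, 15, 16])

# Unsupervised after-school window on weekdays (roughly dismissal to parents' return).
incidents["after_school"] = ts.dt.weekday.lt(5) & ts.dt.hour.between(15, 17)

# Placeholder list of event dates (concerts, games) to join against.
event_dates = pd.to_datetime(["2015-03-15"]).normalize()
incidents["event_day"] = ts.dt.normalize().isin(event_dates)

print(incidents)
```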

In one particularly clever example, a local crime analyst was considering a series of bank robberies, illustrated notionally in Figure 6.1. Analysis of the timeline revealed no readily identifiable pattern, which baffled the team. Days of the week, holidays, and even special events were considered, but nothing emerged until the amount of money taken in each robbery was included in the analysis (Table 6.1). Once this “cash flow” was considered, the pattern finally emerged: larger amounts were associated with a longer period between incidents, while smaller sums were associated with a shorter period between incidents. Subsequent interview of the suspect after his apprehension confirmed that these bank robberies represented his sole source of income and that he needed to maintain a certain household cash flow to meet his financial obligations. When he was able to obtain a larger amount of money he could delay the next robbery; conversely, smaller amounts stolen were associated with a need to rob another bank more quickly.

Figure 6.1. Timeline illustrating a series of armed bank robberies.

Table 6.1. Notional Bank Robbery Data (Incident Date, Amount Taken, and Days to Next Incident)

Date     Bank                            Amount ($)   Days to Next Incident
09 Apr.  Citizen’s National Bank         10k          30
09 May   People’s Bank (south)           5k           8
17 May   People’s Bank (north)           10k          31
17 Jun.  First National Bank             2k           3
20 Jun.  State Credit Union (north)      10k          30
20 Jul.  Banker’s Bank & Trust (north)   5k           7
27 Jul.  First Federal (north)           10k          29
25 Aug.  Bailey Savings & Loan (north)   2k           Arrested

Similarly, segmentation of the medical fraud “timeline” into the period before a bill is submitted for payment, the period between submission of the claim and payment, and the period after payment has been issued (the “pay and chase” model) was used to better understand the fraud lifecycle and to guide relevant approaches to preventing and thwarting fraud.

URL: //www.sciencedirect.com/science/article/pii/B9780128002292000067

Economic Growth and the Environment: A Review of Theory and Empirics

William A. Brock, M. Scott Taylor, in Handbook of Economic Growth, 2005

5.2 Empirical implications

The Kindergarten model relies heavily on the assumed role of technological progress in staving off diminishing returns to both capital formation and abatement. It is impossible to know a priori whether technological progress can indeed be so successful, and hence it is important to distinguish between two types of predictions before proceeding. The first class of predictions comprises those regarding behavior at or near the balanced growth path. This set has received little attention in the empirical literature on the environment and growth, although balanced growth path predictions and their testing are at the core of empirical research in growth theory proper [see the review by Durlauf and Quah (1999)]. The second set of predictions concerns the transition from inactive to active abatement; these are related to the empirical work on the Environmental Kuznets Curve. [See Grossman and Krueger (1993, 1995) and the review by Barbier (1997).]

URL: //www.sciencedirect.com/science/article/pii/S1574068405010282

Extinction, Causes of

Richard B. Primack, Rachel A. Morrison, in Encyclopedia of Biodiversity (Second Edition), 2013

Overfishing

In the North Atlantic, one species after another has been overfished to the point of diminishing returns. The Atlantic bluefin tuna, for example, has experienced a 97% population decline since 1960. Similar grim scenarios can be recounted for other prized large fish, such as the swordfish (Xiphias gladius). One of the most dramatic cases of overexploitation in recent years is related to the booming demand for shark meat and shark fins, caused in large part by the popularity of shark fin soup in Asian restaurants. Shark fishing has become a lucrative alternative to targeting severely depleted commercial fish populations. But many shark populations are now declining dramatically as well, because most species have a relatively slow reproductive cycle. Heavily fished shark populations in the Atlantic, for instance, have declined by 40–99% over the last 20 years, and some species there and elsewhere may soon go extinct.

Another striking example is the enormous increase in demand for seahorses (Hippocampus spp.) in China, which is tied to the nation's economic development. Dried seahorses are used in traditional Chinese medicine because the animal resembles a dragon and is believed to have a variety of healing powers. Approximately 45 t of seahorses are consumed in China per year – roughly 16 million animals. Seahorse populations throughout the world are being decimated to supply this ever-increasing demand, and international trade is now carefully regulated.

URL: //www.sciencedirect.com/science/article/pii/B9780123847195000502

Learning Curve, The

F.E. Ritter, L.J. Schooler, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Mathematical Definitions

The shape of the curve is negatively accelerated: further practice improves performance, but with diminishing returns. Power laws and exponentials are both functions that provide this shape. Mathematical definitions are given in Table 1. The exact quality of the fit depends on innumerable details of averaging, the precise function used, and the scale. For example, the full power law formula is the most precise, but it has additional terms that are difficult to compute, and the asymptote is usually only visible when there are over 1,000 practice trials (Newell and Rosenbloom 1981). The typical power law formula is simpler, but leaves out previous practice. When using this formula, the coefficients of the power law for a set of data can be computed easily by taking the log of the trial number and the log of task time and fitting a linear regression; that is, by fitting a regression in log–log space.

Table 1. Functions that fit practice data

Time = MinTime + B·(N + E)^(−β)   [Full power law formula]
Time = B·N^(−β)   [Simple power law formula]
Time = B·e^(−αN)   [Simple exponential formula]

where B is the range of learning, N is the trial number, E is the number of previous practice trials, and α, β are the learning rate parameters.
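As a concrete illustration of the log–log recipe described above, the following sketch fits the simple power law to invented timing data; the parameter values and noise level are assumptions made only for the example.

```python
# Fitting the simple power law Time = B * N^(-beta) by linear regression in
# log-log space, as described above. Data are invented for illustration.
import numpy as np

trials = np.arange(1, 51)                              # N: trial number
times = 12.0 * trials ** -0.4 * np.exp(np.random.default_rng(1).normal(0, 0.05, 50))

# log(Time) = log(B) - beta * log(N), so a straight-line fit recovers both parameters.
slope, intercept = np.polyfit(np.log(trials), np.log(times), 1)
B_hat, beta_hat = np.exp(intercept), -slope
print(f"estimated B = {B_hat:.2f}, beta = {beta_hat:.2f}")
```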

In general, the power function fit appears to be robust, regardless of the methods used (Newell and Rosenbloom 1981). However, recent work (Heathcote et al. 2000) suggests that the power law might be an artifact arising from averaging (Anderson and Tweney 1997), and that the exponential function may be the best fit when individual subjects employing a single strategy are considered. Distinguishing between the power and exponential functions is not just an esoteric exercise in equation fitting. If learning follows an exponential, then learning is based on a fixed percentage of what remains to be learnt. If learning follows a power law, then learning slows down in that it is based on an ever decreasing percentage of what remains to be learnt.

Regardless of the functional form of the practice curve, there remain some systematic deviations that cause problems, at the beginning and end of long series. The beginning deviations may represent an encoding process. For example, it may be necessary to transform a declarative description of a task into procedures before actual practice at the task can begin (Anderson and Lebiere 1998); the residuals at the end may represent approaching the minimum time for a given task as defined by an external apparatus. These effects appear in Fig. 1 as well.

URL: //www.sciencedirect.com/science/article/pii/B0080430767014807

Population Dynamics: Mathematic Models of Population, Development, and Natural Resources

A. Fürnkranz-Prskawetz, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Malthusian Population Dynamics and Natural Resources

The study of interactions between population and resources has a long history. According to Malthus, population growth reduces material welfare due to diminishing returns to labor on a fixed supply of land. On the other hand the higher the level of material welfare the higher the population growth rate will be. The Malthusian model predicts that ‘population will equilibrate with resources at some level mediated by technology and a conventional standard of living’ (Lee 1986). Improvements in technology will be offset in the long run by increases in the size of the population, but the standard of living will not be related to the level of technology. As such, the Malthusian model provides a description of a rather primitive society with incomes not too far above the subsistence level and where local renewable resources are an important part of the economic production process.

Renewable resources (agricultural land, forests, lakes, etc.) are not in fixed supply as Malthus assumed. Renewable resources regenerate, but if the rate of utilization (harvest) exceeds the rate of regeneration, a renewable resource will be depleted or, in the extreme case, irretrievably exhausted. By adding the dynamics of renewable resource growth to the dynamics of population growth, the Malthusian model is capable of explaining patterns of population growth and resource degradation that do not necessarily end up in a single equilibrium (Malthusian trap) (see Brander and Taylor 1998 and Prskawetz et al. 1994).

The structure of these models can best be described in terms of prey–predator dynamics, with the resources, R, being the prey and the population, P, acting as the predator. The dynamics of renewable resources (Eqn. (1)) are commonly described by the standard model of mathematical bioeconomics, where the net growth of the renewable resource, dR(t)/dt, is affected by two counteracting factors: indigenous biological growth, g(R(t)), and the harvest, H(R(t), P(t)), which depends on the stock of resources available to be harvested and the number of people who are harvesting. Indigenous resource growth is modeled by the logistic growth function g(R(t))=aR(t)(K−R(t)), where the coefficient K determines the saturation level (carrying capacity) of the resource stock (i.e., K is the stationary solution of R if the resource is not degraded) and parameter a determines the speed at which the resource regenerates. The functional form of the harvest function is determined by the prevailing economic structure, and it establishes the link between the stock of resources and population. The population growth rate, (dP(t)/dt)/P(t) (Eqn. (2)), is modeled as an increasing function of material welfare y(t)=y(P(t), H(t)), which is determined by the level of the harvest and will be reduced by population growth. Whenever material welfare falls below the subsistence level, the population will decline. These Malthusian population dynamics imply that population growth may well adapt to resource constraints in contrast to models with exogenous positive population growth, where the economy collapses if resources do not regenerate quickly enough.

(1) dR(t)/dt = g(R(t)) − H(R(t), P(t))

(2) dP(t)/dt = n(y(t)) P(t).

Within this class of models it is possible to investigate how the equilibrium between resources and population stock changes, dependent on the functional relations that govern the dynamics of population and resources. For instance, the efficiency of harvest technology, the degree of substitution between labor and resources, the indigenous rate of resource growth, and the carrying capacity of the resource stock will have an effect on material welfare and hence on population growth. In turn, fertility and mortality will affect the resource dynamics via the input of changing labor stocks in the harvest. An empirical calibration of the model for Easter Island, a small Pacific island characterized by pronounced population fluctuations, was carried out by Brander and Taylor (1998).
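The following sketch integrates Eqns. (1) and (2) numerically under simple illustrative assumptions: a harvest proportional to both the resource stock and the population, and a population growth rate that is linear in the gap between welfare and subsistence. The parameter values are invented and are not the Brander and Taylor (1998) calibration.

```python
# A numerical sketch of Eqns. (1)-(2): logistic resource growth, a harvest
# proportional to both the resource stock and population, and Malthusian
# population growth. All parameter values and the functional forms for H and
# n(y) are illustrative assumptions, not a calibration.
a, K = 0.0001, 12000.0        # resource regeneration speed and carrying capacity
h = 0.0001                    # harvesting efficiency
phi, subsistence = 0.02, 1.0  # fertility response and subsistence welfare
dt = 0.1

R, P = K, 40.0
for _ in range(int(3000 / dt)):
    harvest = h * R * P                       # H(R, P)
    y = harvest / P                           # material welfare per person
    dR = a * R * (K - R) - harvest            # Eqn. (1)
    dP = phi * (y - subsistence) * P          # Eqn. (2)
    R, P = max(R + dR * dt, 0.0), max(P + dP * dt, 0.0)

# Population equilibrates with the resource base at roughly the point where
# welfare equals subsistence, as the Malthusian model predicts.
print(f"resources ≈ {R:.0f}, population ≈ {P:.0f}")
```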

Contrary to the Malthusian predictions, increasing population densities might be beneficial, as argued by Boserup (1981), and could well increase the human carrying capacity of the earth. Higher population densities will initiate technological innovations in agriculture, thereby increasing the yields so that the natural environment can support a larger population without reducing the level of welfare. Similar positive feedback mechanisms are captured in a simple mathematical cartoon of the interdependence between the growth in population P(t) and carrying capacity K(t) (expressed in numbers of individuals) by Cohen (1995) as

(3) dP(t)/dt = r P(t) (K(t) − P(t))

(4) dK(t)/dt = c dP(t)/dt

with r > 0 and c either negative, zero, or positive. The parameter c, which captures the effect of an increment of population on the carrying capacity, determines the long-term population dynamics, and it can represent technological innovation. When c = 1, population size grows exponentially (Euler, eighteenth century); when c < 1, population grows logistically (Verhulst, nineteenth century); and when c > 1, population grows faster than exponentially (von Foerster and co-workers, twentieth century).
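The role of c can be made explicit by integrating Eqn. (4) and substituting the result into Eqn. (3); the following lines sketch the algebra, which is a direct consequence of the two equations rather than an additional modeling assumption.

```latex
% Integrating (4) gives K(t) = K(0) + c [P(t) - P(0)]. Substituting into (3):
\frac{dP(t)}{dt} \;=\; r\,P(t)\,\Bigl[\bigl(K(0) - c\,P(0)\bigr) + (c-1)\,P(t)\Bigr].
% c = 1: the bracket is constant, so P(t) grows exponentially.
% c < 1: the bracket shrinks as P(t) grows, giving logistic growth.
% c > 1: the bracket grows with P(t), giving faster-than-exponential growth.
```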

Malthusian results (in the sense that population will grow to the point where material welfare matches subsistence demand) can be further undermined by allowing population growth to be a choice variable. Zero population growth may well be the optimal choice for individuals in the economy. Since environmental changes are often slow over the course of an individual life span, and since environmental damage may outlive its perpetrators, overlapping generations models (Eckstein et al. 1988) provide an appropriate demographic structure. Intra- as well as intergenerational conflicts can be modeled in such a framework, taking account of the effects of increases in population and resource exploitation on the future population's quality.

The models presented so far represent traditional societies in which populations derive their living from primary occupations (agriculture, hunting, fishing, etc.) which depend on the availability of resources. But as societies become less bound to the land, more urbanized and more technological, they not only use the environment as a source of natural resources but also as a dump for waste products arising from human activity. Furthermore, an economy's production possibilities are no longer determined by the maximum sustainable yield of renewable resources. Improvements in technology can increase the sustainable yields or reduce the resource stock required for production, and economic growth will allow for the use of man-made capital in place of natural resources. In open economies with trade, technological change, and economic growth, there is no simple and direct relationship between population growth on the one hand and the environment on the other hand.

Resources might still be at risk, albeit no longer by positive population growth alone. The risk can stem from the environmental costs of consumption and production. It is therefore of great importance to understand how these environmental impacts depend on different population structures.

URL: //www.sciencedirect.com/science/article/pii/B0080430767021112

Research and Development Costs and Productivity in Biopharmaceuticals

F.M. Scherer, in Encyclopedia of Health Economics, 2014

Reasons for Change

Several hypotheses vie to explain the apparently continuous increase in R&D costs per molecule approved. Despite advances in the technology of preclinical small-molecule screening, one might suppose that diminishing returns would set in after seven or more decades of active discovery, among other things forcing companies to focus on more difficult therapeutic targets. During the 1990s it was thought that the perfection of large-molecule gene-splicing techniques would reverse any such tendency and usher in a new golden age of pharmaceutical discovery. However, the observable changes thus far have been less than revolutionary.

There is definite evidence that clinical trial sizes have risen over time, partly as a result of tougher standards established by the US Food and Drug Administration. Also, as individual therapeutic classes became more crowded, companies may have elected to increase sample sizes to improve the statistical significance of results touted in competitive marketing. For three therapeutic categories studied by the OTA (1993), average enrollment in Phase I through III clinical trials rose from 2237 for drugs approved in 1978–83 to 3174 for 1986–90 entities, implying a growth rate of 4.7% per year between median approval years. The average number of subjects drawn into Phase IV grew considerably more rapidly, from 413 to 2000 (sic), or 21% per year. Using publicly available data, DiMasi et al. (2003) estimate that average trial sizes in the 1980s and 1990s rose at a rate of 7.47% per year. In addition, the complexity of trials rose. DiMasi et al. (2003) report from an outside data source that the number of procedures administered per trial subject increased between 1990 and 1997 by 120% for Phase I trials, by 90% for Phase II trials, and by 27% for Phase III trials. Weighting the phase growth percentages by the fraction of out-of-pocket costs incurred per phase, this implies an average growth of 50% in 7 years, or 5.8% per year.
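For readers who want to check the annualization arithmetic, a two-line computation reproduces the first of these rates; the 7.5-year gap between median approval years is an assumption, since the exact interval used is not reported here.

```python
# Enrollment growth from the OTA figures quoted above; the 7.5-year gap between
# median approval years (1978-83 vs. 1986-90) is an assumed interval.
ratio = 3174 / 2237
annual = ratio ** (1 / 7.5) - 1
print(f"implied annual growth in trial enrollment: {annual:.1%}")  # close to the quoted 4.7% per year
```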

Clinical trials are mostly conducted in hospitals and similar medical centers. Over the period 1970–90, the cost of a day of hospitalization in the US rose at an average rate of 11% per year, nearly twice the rate at which the GDP price index was increasing. It seems reasonable to assume that in-hospital test costs rose commensurately. There is also reason to believe that major hospitals view their clinical testing activities as a ‘profit center’ and dump some of their soaring overhead costs onto the well-heeled pharmaceutical firms sponsoring clinical trials.

A more speculative hypothesis is that ‘Big Pharma’ companies have allowed organizational slack to accumulate in their R&D activities, especially after numerous large-company mergers failed to achieve substantial increases in the output of new therapeutic entities (Munos, 2009). A correction against this trend may have begun in the second decade of the twenty-first century as pharmaceutical giants such as Pfizer and Merck, acknowledging disappointment over the lagging productivity of their innovation efforts, cut back their R&D staffs in the wake of major new mergers.

URL: //www.sciencedirect.com/science/article/pii/B9780123756787012037

Higher Education and the Labor Market

J. Enders, in International Encyclopedia of Education (Third Edition), 2010

Qualitative Aspects

Since the 1990s, a new process of adjustment and restructuring seems to be under way that tends to undermine the whole notion of a quantitative match. The perils are no longer seen in diminishing returns on investment due to growing competition, or in labor markets swamped by overqualified and dissatisfied applicants. It is nowadays more frequently emphasized that the occupational structure and social stratification system itself has become mobile. This is accompanied by deep structural changes in the way the economy works as well as a perceived individualization of the life-course regime (Beck, 1986). The characteristics of occupations and jobs, the vertical as well as the horizontal division of work, and the needs and reward structures of the employment system continue to be restructured. Learning–working pathways through education, training, and employment tend to be de-institutionalized and re-institutionalized. Quality thus stands for possessing a mixture of skills and knowledge for new and changing configurations. Graduates are expected to be trained for what is increasingly seen as a market for knowledge workers in constant flux. In addition, the student body has become more heterogeneous due to ongoing massification, in terms of social background, age, levels of preparation and work experience, patterns of studying and learning, aspirations, and life chances.

The issue of the relationship between higher education and the labor market, and between demand and supply, thus gains again in importance. However, the emphasis is less on structural–quantitative relationships and more on the qualitative match between a changing body of students and graduates and ever-changing job requirements and labor markets. In consequence, competences beyond cognitive knowledge play a stronger role in political debates as well as in scholarly work. Terms such as soft skills, key qualifications, practice orientation, and employability signal that higher education is increasingly expected to take more explicit care of competences that go beyond the codified body of knowledge related to certain disciplines, fields of study, occupations, or professions (Teichler, 1999). Moreover, research on employers' recruitment processes and selection criteria shows that employers in fact pay less attention to curricular details than to the reputation of particular higher education institutions or fields of study, the professional experiences of graduates during their studies, and the training and socialization of soft skills.

From this point of view, higher education is too youth centered and program oriented as far as the actual and desired clientele is concerned, and relies on the old-fashioned idea that full-time students can appropriately be equipped with a consistent stock of knowledge prior to the transition from education to work. Higher education is thus expected not only to continue considering fair access according to sociobiographic background, but also to strengthen the overall supply of a highly trained workforce in the sense of the old regime. It is also expected to diversify further structurally and, in terms of the conditions of study and courses provided, to devote greater attention to generic competencies and social skills, to reshape its function for a society of lifelong learning, to prepare students for growing internationalization, and to serve practical learning beyond classroom teaching. In other words, higher education is expected to move from a front-end model to a life-span model of education and training, from curricula to learning pathways.

Certainly, such a vast and inclusive concept of higher education in the knowledge society holds strong appeal, but it is by no means uncontested. This is partly because such an integrated view of the role of higher education in the knowledge society can obscure attempts to define clearly which educational goals should be pursued and who should be responsible for which provisions and actions. It is also unclear what an appropriate balance between cognitive skills and soft skills, or between workplace-related competencies and transferable skills, would look like (Tuijman, 1999).

Studies among employers point, for example, to the fact that growing expectations regarding soft skills do not necessarily imply that the cognitive domain of knowledge becomes less important. Rather, additional signals of employability grow in importance on top of traditional expectations in the cognitive domain. In addition, various studies show that different countries provide distinctive mixtures of qualifications, and that the returns on investment in further specialization or greater flexibility are a topic of considerable debate.

Last but not least, the lack of any satisfactory measure of the abilities and competencies of higher education graduates has attracted increasing attention. The outcomes and policy impacts of research on school effectiveness and of studies on the qualifications of students in school (e.g., the Programme for International Student Assessment (PISA) study of the Organization for Economic Co-Operation and Development (OECD)) are also stimulating this search for appropriate measures and methods to study the abilities and competencies of higher education graduates. This is an area where further conceptual and analytical work is of growing interest. Various projects are under way to overcome this situation and to find ways to assess objectively the value added by higher education training and experience. This is by no means an easy undertaking, but it would help us learn more about the suggested key qualifications or soft skills, and how they are related to the mix of hard skills and tacit knowledge traditionally provided. Further research on learning outcomes could also link up with labor market studies in order to learn more about the trainability of employability, and about the evidence for the recognition, transfer, and portability of skills to and within the labor market. Further insights in this area are also likely to revitalize traditional debates between the human capital approach and the screening hypothesis, which have suffered from a lack of reliable measures of graduates' qualifications.

What is the best example of the law of diminishing returns?

For example, if a factory employs workers to manufacture its products, at some point the company will operate at an optimal level; with all other production factors held constant, adding workers beyond this level will result in less efficient operations.

Which of the following is an example of the law of diminishing returns quizlet?

The rate of utility gained decreases with each additional hour that Isabella stays up to study, because her studying provides less benefit to her as she starts to get tired.

Which of the following examples best illustrates the law of diminishing marginal utility?

Correct option: (b) With each additional pen Jill buys, her willingness to pay for another pen decreases. Explanation: The law of diminishing marginal utility states that as a consumer consumes more and more of a product, the satisfaction derived from each subsequent unit falls.
