Pages

Wednesday, October 5, 2011

Research Paper: Physical Development In Elementary School Age Children

Physical Development In Elementary School Age Children

Strength and coordination are two areas of physical development that seem almost to take care of themselves, which may be why they are often overlooked when curriculum planning is under way in many early childhood settings. After all, children are going to get stronger as they grow older (true); they will also become more coordinated as they grow older (also true). Unfortunately, if left to chance, children may not reach their full potential in either of these critical areas of physical development. Short of weight lifting and coordination drills, what can we do to develop children's abilities? Plenty! Young children need overall strength so that they can participate in a wide variety of activities, derive pleasure from those activities, gain confidence in their abilities, and have the strength to try new things. Coordination is another ability that begins developing on its own, as infants begin to explore their bodies and their world. By coordination, we mean a series of movements, organized and timed to occur in a particular way, that bring about a particular result.

When we start thinking about and planning for strength and coordination in young children, we have to realize that, like all developmental issues, there are going to be individual differences and, in general, development is going to happen at its own rate. You cannot make development happen; you can only support it by creating the right environment for each child as he reaches a particular point on the developmental continuum. Check out your preschool playground equipment and make sure that there are other sorts of climbing activities besides playing on steps and slides. You can add tires of different sizes, placed in various patterns on the ground for "follow-the-leader" fun. Hang climbing ropes from sturdy tree limbs or swing-set frames to encourage upper-body development through climbing. (Make sure you monitor this activity closely and take the ropes down when playtime is over.) You can also tape a sand pail to each end of a 36-inch wooden dowel and have children carry different amounts of sand, water, or rocks from one place to another.
Ladders and slides aren't challenging enough for children in this age group. To add strength and coordination development opportunities, tie a one-inch natural fiber rope horizontally between two trees, about 54 inches above the ground. Let children monkey-swing on it, or ask them to try to travel hand-over-hand as far as they can along the rope. (Again, monitor this activity closely and take the rope down when playtime is over.) The next time you purchase riding toys for your wheeled-vehicle path, look for those that are hand-powered, not foot-powered. Whatever you do, don't let the children be limited by equipment. Be creative as you look for ways for children to lift things or themselves--you may be surprised at what you'll find!


Tuesday, October 4, 2011

Term Paper: Women’s Rights

In 1963, the New York Court of Appeals disrupted an entrenched male privilege in the New York Police Department (NYPD) when it cleared the way for policewoman Felicia Shpritzer and her peers to take the promotional examination for police sergeant. Nevertheless, although Shpritzer's case against the city removed a longstanding barrier to expanded roles for women, it did not argue that women were men's equals. Instead, the plaintiffs argued that supervisory duties were irrelevant to gender. Although their campaign succeeded in putting women in supervisory police roles and indicated a changed playing field favorable to women in the 1960s, the victory was far more ambiguous than David's slaying of Goliath. Policewomen advocates in the early 1960s continued to struggle with the liberating and debilitating potential of domestic ideals.
Women claimed that they were the ideal complements to male officers. Rather than embracing the democratic ideal of equal opportunity as Black men did in challenging their exclusion from police jobs, Black and White women pursuing expanded roles in the 1940s, 1950s, and early 1960s, more often than not, defined themselves as the alter ego of the patrolman. If the model male officer was physically imposing, combative, and heroic, they asserted, women officers would be nurturing, motherly, and protective. Similar to women making inroads into other areas of employment in this period, these women justified their new roles as a fulfillment of their feminine duties. World War II era concerns about social hygiene, morality, and female delinquency, as well as postwar concerns about delinquent boys and girls, provided women with a gendered wedge for working in the male space of New York's “mean streets.”
Policewomen advocates incrementally encroached upon the male domain of patrol work by illustrating the relevance of feminine skills such as prevention, sensitivity, communication, and child protection. By the beginning of the 1960s, the increasingly similar nature of policewomen's and policemen's duties, despite their quite distinctly gendered justifications, led some women to question the validity of the separate job titles, “policewoman” and “patrolman.” The problem was that the femininity campaign for policewomen advocates, which had been so successful in expanding the duties of women's work in law enforcement, further entrenched the rhetorical boundaries between policewomen and patrolmen. Although individual women may have privately embraced radical equality in gender roles, few policewomen advocates before the mid-1960s eschewed the enabling languages of motherhood and domesticity.
Race further complicated the realization of women's equal treatment in the NYPD. Black women were doubly discriminated against, and soon discovered that they could not always enjoy the protection of domestic ideology. Instead they found themselves assigned to dangerous undercover work. Formulating strategies for access to privileged jobs held by men in the police department was not as simple as choosing between arguments about women's likeness to or difference from men. It required a sorting out of racialized conceptions of gender and gendered conceptions of race.

NYPD officials employed race and gender together to structure the way they policed the city. Women were neither able to patrol certain areas of the city nor were they eligible to advance to sergeant, lieutenant, or captain. However, they could make lateral moves to specialized detective units, which often involved more challenging, albeit dangerous, tasks. NYPD officials frequently encouraged Black women to pursue such jobs because they could purportedly infiltrate dangerous neighborhoods without being detected. Also, supervisors seemed to believe that Black women could handle themselves in violent situations, while White women could not. One woman reported, “you might be asked to do something White women wouldn't be asked to do. When a White sergeant was looking at me, he wasn't looking at his mother or his sister. He might send me in a hallway or roof but he would never send a White young lady.” Whether consciously or not, White male supervisors were more protective of White policewomen.
The exclusion of most White women from dangerous detective duties reveals much about what White NYPD officials, New York politicians, and other citizens believed was the proper place of “true ladies” in the 1950s. Women were to partake in preventative police work rather than crime fighting. In exploring the ideology that undergirded this assumption, it is important to lay out the historical duties and idealized roles of policewomen in this period to illustrate how they have shifted over time. Of particular import is how these roles changed during World War II and the immediate postwar era. Despite the confining rhetoric of 1950s domestic ideology, policewomen were able to manipulate that language to their advantage. Leaders of the policewomen's movement learned that they could incrementally expand the sphere of policewomen's duties by invoking a traditionally feminine ideal in which policewomen would be responsible for the domestic welfare of the streets.

“Sober, Respectable Women!”: Police Matrons at the Turn of the Century
Advocates for women's rights embraced essentialized arguments about women's natures to provide an opening for women in police work. In New York and other cities in the late nineteenth century, women's organizations such as the Women's Prison Association, the National League of Women Voters, the American Female Guardian Society, and the Women's Christian Temperance Union encouraged social reform that would allow women to serve as matrons to care for female prisoners detained in police precincts. Before the 1890s, male officers, their wives, or the maid of the police station searched female prisoners (Brown 3). Of particular concern to women's groups was the potential for sexual abuse of female detainees. To prevent such misuses of power, reformers asserted that women, as bastions of virtue, should have a hand in police work to ensure the public good and welfare (Melchionne iv).

Male prison workers, who feared that women workers would displace them, retaliated by portraying the prison as a place of degeneracy that was unfit for women workers. Agreeing with the premise of women's clubs that women were naturally virtuous and moral, but manipulating that cultural value to contest reform within the police station, the Men's Prison Association argued that “a decent sober woman could not search a female alcoholic because she would be contaminated and demoralized by her contact with such depraved creatures” (qtd in Cohn 43). In the 1880s, Commissioner Vorhis opposed the use of women as police matrons because he thought that the job was too physically demanding for them. Policewomen in the 1900s, hearing similar claims from male patrol officers, would note that such objections were based on a fear that women would displace men in a field that had hitherto been considered a masculine domain (Cohn 43). In response to such claims, women's groups promised to ensure that candidates for the job of matron would be “of good moral character” and that they would secure letters of recommendation from 20 “respectable” women before they were appointed (Melchionne 37). Most men and women agreed that women ought to be models of virtue and morality, but disputed the implications of that outlook for women's employment.
The impingement of police matrons upon the terrain of male police work was neither understood nor promoted by women's activists as a case of women performing men's jobs. Rather, they saw themselves redefining the nature of a small aspect of police work as feminine. From their point of view, moral norms dictated that it was appropriate for women to care for women prisoners. They did not wish to challenge the conventional wisdom that men and women were physically, intellectually, or emotionally different. Indeed, it was this very difference that justified the limited incorporation of women into the station as matrons. Women's roles in police work in the NYPD were relatively static until World War I. This would be the case for women aspiring to police work in the future. Individuals could stretch the boundaries of their duties under creative titles and gain access to more interesting and challenging tasks. One exceptional case was Isabella Goodwin, who worked under the title of matron at the Mercer Street Station from 1895 to 1912. While serving as a matron, Goodwin made a number of shrewd observations about women prisoners, which led to a supervisor's suggestion that she try her hand at detective work. Her supervisors quickly realized that a woman could go undetected while investigating criminal activities, so Goodwin was assigned detective duties. She gathered evidence against fortunetellers, investigated banking scams and extortion rackets, and exposed fake medical practitioners. When Commissioner Dougherty learned of her work, he appointed her to the position of Detective First Grade, formalizing her position and more than doubling her salary. While Goodwin was exceptional, it was common for police departments to use matrons in capacities other than guarding female prisoners (Segrave 11). It was obvious to police officials that some tasks were more suitable for women.
However, like other police matrons and their advocates, Goodwin remained guarded about her femininity, noting that her success in police work was due to her “women's intuition,” and that it took a toll on her work at home, where “a woman's duty is first and foremost.” Reifying her domestic primacy assured Goodwin protection from potential critics of her “public” work as a police detective.
More public roles in police work became possible for increasing numbers of women due to the physical and social dislocations of men and women during World War I. The war created the conditions under which women could make new claims about their relevance to police work. Three significant and interrelated phenomena led to these conditions: the number of men fighting overseas, the vacancy left in traditional men's jobs, and the social stresses on families due to absent parents. The incorporation of women into industry compounded the difficulties of family organization created by the heavy recruitment of men into the armed forces. Of particular concern to some Americans during and immediately after the war was the perceived decline in public morals and waywardness of America's youth. Many citizens feared that the concentration of young men at the new military recruiting centers posed dangers to vulnerable young girls. Since most politicians already viewed women as the guardians of public virtue, it made sense to solicit their help in this morality crusade.

The NYPD defended its use of women as auxiliary police reserves by pointing to women's successful substitution for men in other industries and political and social spheres. It was also a matter of a labor shortage. The city had already organized a police reserve of male officers to replace the soldiers, but by May 1918, it was clear that they were still understaffed. In order to fill the vacancies, Mayor John Hylan organized the Committee of Women on National Defense, which established a small unit of women as “protective officers.” Special Deputy Commissioner Rodman Wanamaker, head of the Police Reserves, constructed a tenuous defense of the newly hired women. He said, “New York women have the vote,” and therefore “they should have an active part in enforcing the laws.” Despite such treatments of women as potential equals, Wanamaker justified the department's inclusion of them by focusing on their responsibilities for problems relating to youth and sexuality. Furthermore, he made it clear that their service was to be both “temporary and voluntary.” Wanamaker divided the city into zones that included a number of precincts, and assigned policewomen to each zone to patrol and look after the welfare of young girls who might be found in the company of men in secluded places such as parks and beaches (Melchionne 95). Although Wanamaker granted women new crime-fighting responsibilities, he reserved for men the more critical work of handling “rough and violent lawbreakers.” As their title suggested, protective officers were assigned to do preventative rather than punitive work. Although vested with the power to do so, the more than 5,000 recruits made no arrests, contenting themselves with reporting to their superior officers only the most flagrant cases of disorder among soldiers. Fulfilling their duties as moral guardians, women protective officers scouted the streets, parks, camps, armories, recruiting stations, dance halls, motion-picture theaters, and amusement parks.
Likewise, they conducted investigative work only in places where young girls might be exploited: furnished rooming houses, places of questionable employment, restaurants, and railroad terminals (Melchionne 58).



Monday, October 3, 2011

Essay: The Child’s Brain

The Child’s Brain

A child's brain is a magnificent engine for learning. A child learns to crawl, then walk, run, and explore. A child learns to reason, to pay attention, to remember, but nowhere is learning more dramatic than in the way a child learns language. As children, we acquire language -- the hallmark of being human. In nearly all adults, the language center of the brain resides in the left hemisphere, but in children the brain is less specialized. Scientists have demonstrated that until babies are about a year old, they respond to language with their entire brains, but then, gradually, language shifts to the left hemisphere, driven by the acquisition of language itself. But if the left hemisphere becomes the language center for most adults, what happens if in childhood it is compromised by disease? Brain seizures, such as those caused by epilepsy and Rasmussen's syndrome, have a devastating effect on brain development in some children. Early brain development also appears to shape risks that surface much later in life: minor developmental delays early in life, like beginning to walk later than average, may forecast alcoholism, according to a new study. The authors suggest that such problems with early childhood brain development may in fact contribute to the disease.
The brain's cerebellum plays a crucial role in motor development and the control of fine, coordinated movements such as walking and playing musical instruments. Some researchers have proposed that the region is also involved in impulse control and that a dysfunctional cerebellum may therefore predispose to addiction. This theory led pharmacologist Ann Manzardo of the University of Kansas Medical Center in Kansas City, Kansas, to ask whether variations in early motor performance might predict alcoholism later in life. To test the theory, Manzardo's team analyzed data from a well-known Danish alcoholism study that followed 330 baby boys — two thirds of whom had alcoholic fathers — through their 40s. Looking at motor development and the frequency of alcoholism in the subjects at age 30, Manzardo and her team discovered that 77% of the alcoholics had not yet been able to walk at 12 months of age, compared to 43% of nonalcoholics. Because the cerebellum is involved in motor development, Manzardo says the region may be an additional marker for alcoholic tendencies. As such, she says, "we need to focus more on early childhood brain development to see if there are contributing factors to the development of alcoholism."
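To get a feel for how strong the reported association is, the two percentages can be converted into an odds ratio. The sketch below is purely illustrative: only the 77% and 43% figures come from the study summary above; the helper function and the rounding are mine, and the real odds ratio would have to be computed from the study's raw counts.

```python
# Odds ratio for "not yet walking at 12 months" in alcoholics vs. nonalcoholics.
# The 77% and 43% rates are taken from the study summary above; everything
# else here is an illustrative assumption, not a reported figure.

def odds(p):
    """Convert a proportion to odds (p of the event vs. p of no event)."""
    return p / (1.0 - p)

p_alcoholic = 0.77      # late walkers among the alcoholic group (from the text)
p_nonalcoholic = 0.43   # late walkers among the nonalcoholic group (from the text)

odds_ratio = odds(p_alcoholic) / odds(p_nonalcoholic)
print(round(odds_ratio, 2))  # roughly 4.4
```

In other words, under these reported rates, the odds of having been a late walker are more than four times higher among the men who later became alcoholics.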

Sunday, October 2, 2011

Research Paper: Effects Of Sleep Deprivation

Literature Review - Effects Of Sleep Deprivation

Normal, healthy individuals need adequate sleep for optimal cognitive functioning (Himashree et al., 2002). Without adequate sleep, humans show reduced alertness (Penetar et al., 1993) and impairments in cognitive performance (Thomas et al., 2000, 2003). Prolonged sleep deprivation is associated with decrements in elementary cognitive abilities such as vigilance and sustained attention (Doran et al., 2001; Wesensten et al., 2004), as well as impairments in complex, higher-order cognitive processes such as verbal fluency, logical thought, decision making, and creativity (Harrison & Horne, 1997, 1999, 2000). In occupational settings such as aviation, air traffic control, and sustained military operations where constant vigilance is a necessity, extended periods of sleep loss have been associated with catastrophic accidents (Mitler et al., 1988) and may have been a factor in some friendly fire incidents (Belenky et al., 1994). Studies of sleep-deprived individuals show that errors in attention begin to emerge by 19 h of continuous wakefulness (Russo et al., 2004) and cognitive performance declines at a rate of approximately 25% for each 24-h period of wakefulness (Belenky et al., 1994).
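The ~25%-per-24-h figure can be turned into a rough back-of-the-envelope model. The sketch below assumes the decline is linear from a 100% baseline; the source does not specify the shape of the curve, so both the linear form and the function name are illustrative assumptions.

```python
# Rough model of cognitive performance under total sleep deprivation, based on
# the ~25% decline per 24 h of wakefulness cited above (Belenky et al., 1994).
# The linear form is an assumption made for illustration only.

def performance(hours_awake, baseline=100.0, decline_per_24h=25.0):
    """Estimated performance (% of rested baseline) after `hours_awake` hours."""
    loss = decline_per_24h * (hours_awake / 24.0)
    return max(baseline - loss, 0.0)   # performance cannot drop below zero

for h in (0, 24, 48, 72):
    print(h, performance(h))   # 100.0, 75.0, 50.0, 25.0
```

Even this crude model makes the practical point vivid: by the second full day awake, estimated performance is at half of baseline.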

Sleep deprivation produces global decreases in cerebral metabolism and blood flow, with the greatest declines evident in those regions critical for higher-order cognitive processes (Thomas et al., 2000). These regions, the heteromodal association cortices, are associated with attention, vigilance, and complex cognitive processing, and reductions in activity within these regions are associated with decrements in these higher-order cognitive processes (Mesulam, 1999). As global blood flow and metabolic activity decline during prolonged periods without sleep, the brain appears to compensate by recruiting cognitive resources from nearby brain regions within the prefrontal and parietal cortices in order to maintain cognitive performance at acceptable levels (Drummond et al., 2001). Some evidence suggests that these compensatory activities may be particularly prominent within the right cerebral hemisphere (Drummond et al., 2001). Consistent with these reports, other studies suggest that cognitive processes mediated by the right hemisphere are more adversely affected by sleep deprivation than those mediated by the left (Johnsen et al., 2004; Pallesen et al., 2004).
Neuropsychological evidence suggests that the right cerebral hemisphere is dominant for attentional processes (Heilman & Van Den Abell, 1980; Mapstone et al., 2003). Much of the evidence supporting the dominance of the right hemisphere in attention comes from studies of patients with unilateral brain damage (Heilman & Van Den Abell, 1980; Weintraub & Mesulam, 1987). Lesions to the right cerebral hemisphere are more likely to produce contralateral hemispatial neglect than similar lesions to the left hemisphere (Behrmann et al., 2004; Mapstone et al., 2003; Mesulam, 1999). Further evidence supporting the prominent role of the right hemisphere in attentional processing comes from several functional neuroimaging studies that reveal greater right hemisphere activity in response to tasks requiring allocation of spatial attention (Fink et al., 2001; Macaluso et al., 2001). The accumulating evidence suggests that the left cerebral hemisphere allocates its attentional processing predominantly toward the contralateral (i.e., right) hemispace, whereas the right hemisphere appears to distribute attentional processing more equally between both hemispaces, and is therefore considered dominant for attention (Mesulam, 1999). Consequently, the phenomenon of contralesional neglect occurs nearly exclusively following lesions to the right hemisphere.
Given the apparently greater role of the right hemisphere in attentional processing and the preliminary evidence that the cognitive processes mediated by the right hemisphere may be more sensitive to the detrimental effects of sleep deprivation, it was hypothesized that prolonged sleep loss results in greater impairment of right hemisphere visual attention mechanisms oriented toward the contralateral (i.e., left) perceptual hemispace. Participants were assessed several times each day while remaining awake for 40 h. During each 15-min testing session, participants monitored a 150° arc of lateral visual space for periodic occurrences of brief flashes of light while simultaneously performing a continuous serial addition task.
Adequate sleep is important for both good mental and physical health. Poor sleep quality is a significant predictor of depressed mood (Mendlowicz, Jean-Louis, von Gizycki, Zizi & Nunes, 1999). Sleep deprivation has been shown to worsen depressive symptoms in some individuals (Benedetti, Zanardi, Colombo & Smeraldi, 1999; Beutler, Cano, Miro & Buela-Casal, 2003) and increase disturbed mood (Crabbe, 2002; Dinges et al., 1997). Sleep deprivation can also result in increased anxiety (Miro, Cano-Lozano, Espinosa & Buela-Casal, 2002), fatigue, confusion, and tension (Dinges et al., 1997). Furthermore, sleep deprivation affects mood to a greater degree than either cognitive or motor performance (Pilcher & Huffcutt, 1996). Regarding physical health, poor sleep quality and sleep loss are associated with decreased immune function (Cruess et al., 2003; Irwin, 2002), the pathophysiology of cardiovascular disease and diabetes (Roost & Nilsson, 2002), and also the development of overweight/obesity (Agras, Hammer, McNicholas & Kraemer, 2004).

Sleep deprivation also influences food consumption in studies of animals, although these studies have shown some conflicting results. For example, studies with rats have shown that sleep deprivation may lead to overeating (Brock et al., 1994; Tsai, Bergmann & Rechtschaffen, 1992). On the other hand, Johansson and Elomaa (1986) found a reduction in the amount of food consumed by rats when deprived of rapid eye movement (REM) sleep. In addition, some studies also demonstrated that sleep deprivation disturbs the light/dark eating pattern in rats rather than simply increasing or decreasing food intake (Elomaa, 1981; Martinez, Bautista, Phillips & Hicks, 1991). Overall, sleep deprivation seems to alter eating patterns among animals. There are relatively few studies on the effects of sleep on food consumption or food choice in humans, but several pieces of indirect evidence exist to suggest a link between sleep and food consumption. Hicks, McTighe and Juarez (1986) found that short-sleeping college students (e.g., 6 h per night) ate more small meals or snacks than long-sleepers who averaged 8 h or more of sleep per night. There is also evidence showing that individuals with eating disorders display abnormal sleep patterns. For example, Latzer, Tzischinsky, Epstein, Klein and Peretz (1999) found that women with bulimia nervosa reported more difficulty falling asleep, more early waking, more headaches on awakening, and more daytime sleepiness than women without bulimia.

Additional evidence for an association between sleep and eating comes from studies of the hypothalamic pituitary adrenal (HPA) axis stress hormone cortisol and other studies of psychosocial stress. There is a negative association between the amount of REM sleep and cortisol levels (Lauer et al., 1989) and a positive association between cortisol levels and calories consumed (Epel, Lapidus, McEwen & Brownell, 2001). In addition, sleep loss may be thought of as a source of stress for some individuals, which may subsequently influence food choice and food consumption as well. Increases in stress lead to more snacking and a decrease in the consumption of typical meal-type foods (Oliver & Wardle, 1999). In sum, there is some evidence that loss of sleep, as a stressor, may influence eating patterns, but at the time no study had examined the effects of sleep restriction on food choice and consumption. The study reviewed here examined the association between self-imposed sleep deprivation and eating among a sample of college students. The authors hypothesized that individuals would change their pattern of calorie consumption on the day following partial sleep deprivation. Due to the lack of conclusive evidence, as discussed above, they did not make an a priori hypothesis regarding the direction of change in calorie intake. They also predicted that individuals would choose foods differently following partial sleep deprivation; specifically, in concordance with the Oliver and Wardle (1999) study mentioned above, they predicted that foods would be chosen based less on health and weight control and more on mood and convenience.

In that study, the effects of self-induced partial sleep deprivation among an undergraduate sample were examined. The results showed significant differences in food consumption and food choice following partial sleep deprivation as compared to nights of normal sleep. As expected, there was a change in food consumption, as measured by calories consumed, following a night of partial sleep deprivation: calorie consumption decreased after sleep loss, as in Johansson and Elomaa's (1986) study with rats. It is noteworthy that the decrease in calories did not become statistically significant until two days after sleep deprivation rather than the day after. It could be argued that this indicates that sleep deprivation was not the cause of the decline and that some other factor played a role. One possible explanation is that people consume more calories following the weekend and eat less as the next weekend approaches. However, it is important to note that the decrease in calories did not begin until after sleep loss. Also, some participants began the diaries on Monday while others began on Tuesday, making it less likely that the finding was due only to the time frame of the study. Other explanations for the observed decrease in calorie consumption include diary fatigue and increased awareness of intake. Diary fatigue could have led participants to eat the same amount but record less in the diary, or to actually consume less because of an aversion to writing in it. Similarly, a heightened awareness of calorie intake could have led to a decrease in food consumption for health or weight-control reasons. Because there was no control group that kept diaries but did not experience sleep loss, the decrease in calories cannot be attributed solely to sleep deprivation.

Reference:

Agras, W., Hammer, L., McNicholas, F., & Kraemer, H. (2004). Risk factors for child overweight: A prospective study from birth to 9.5 years. Journal of Pediatrics, 145, 20–25.

Attie, I., & Brooks-Gunn, J. (1989). Development of eating problems in adolescent girls: A longitudinal study. Developmental Psychology, 25, 70–79.

Backhaus, J., Junghanns, K., & Hohagen, F. (2004). Sleep disturbances are correlated with decreased morning awakening salivary cortisol. Psychoneuroendocrinology, 29, 1184–1191.

Benedetti, F., Zanardi, R., Colombo, C., & Smeraldi, E. (1999). Worsening of delusional depression after sleep deprivation: Case reports. Journal of Psychiatric Research, 33, 69–72.

Beutler, L. E., Cano, M. C., Miro, E., & Buela-Casal, G. (2003). The role of activation in the effect of total sleep deprivation on depressed mood. Journal of Clinical Psychology, 59, 369–384.

Brock, J. W., Farooqui, S. M., Ross, K. D., Payne, S., & Prasad, C. (1994). Stress-related behavior and central norepinephrine concentrations in the REM sleep deprived rat. Physiology and Behavior, 55, 997–1003.

Buysse, D. J., Reynolds, C. F., Monk, T. H., Berman, S. R., & Kupfer, D. J. (1989). The Pittsburgh Sleep Quality Index: A new instrument for psychiatric practice and research. Psychiatry Research, 28, 193–213.

Crabbe, J. B. (2002). Effects of cycling exercise on mood and brain electrocortical activity after sleep deprivation. Dissertation Abstracts International: Section B: The Sciences & Engineering, 62, 3967.

Cruess, D. G., Antoni, M. H., Gonzalez, J., Fletcher, M. A., Klimas, N., Duran, R., et al. (2003). Sleep disturbance mediates the association between psychological distress and immune status among HIV-positive men and women on combination antiretroviral therapy. Journal of Psychosomatic Research, 54, 185–189.

Dinges, D. F., Pack, F., Williams, K., Gillen, K. A., Powell, J. W., Ott, G. E., et al. (1997). Cumulative sleepiness, mood disturbance and psychomotor vigilance performance decrements during a week of sleep restricted to 4–5 hours per night. Sleep: Journal of Sleep Research & Sleep Medicine, 20, 267–277.

Elomaa, E. (1981). The light/dark difference in meal size in the laboratory rat on a standard diet is abolished during REM sleep deprivation. Physiology and Behavior, 26, 487–493.

Epel, E., Lapidus, R., McEwen, B., & Brownell, K. (2001). Stress may add bite to appetite in women: A laboratory study of stress-induced cortisol and eating behavior. Psychoneuroendocrinology, 26, 37–49.

Saturday, October 1, 2011

Term Paper: Bloods And Crips and Hells Angels

In Los Angeles and other urban areas in the United States, the formation of street gangs increased at an alarming pace throughout the 1980s and 1990s. The Bloods and the Crips, the most well-known gangs of Los Angeles, are predominantly African American [1], and they have steadily increased in number since their beginnings in 1969. In addition, there are approximately 600 Hispanic gangs in Los Angeles County, along with a growing Asian gang population numbering approximately 20,000 members. Surprisingly, little has been written about the historical background of black gangs in Los Angeles (LA). Literature and firsthand interviews with Los Angeles residents point to three significant periods in the development of the contemporary black gangs. The first period, which followed WWII and significant black migrations from the South, is when the first major black clubs formed. The second period, which followed the Watts rebellion of 1965, was the civil rights era of Los Angeles, during which blacks, including many former club members, became politically active for the remainder of the 1960s. By the early 1970s, black street gangs began to reemerge; by 1972, the Crips were firmly established and the Bloods were beginning to organize. This third period saw the rise of LA's newest gangs, which continued to grow during the 1970s and had formed in several other cities throughout the United States by the 1990s. While black gangs do not make up the largest or most active gang population in Los Angeles today, their influence on street gang culture nationally has been profound.

In order to better understand the rise of these groups, I went into the original neighborhoods to document the history that led to them. There are 88 incorporated cities and dozens of other unincorporated places in Los Angeles County (LAC). In the process of conducting this research, I visited all of these places in an attempt not just to identify gangs active in Los Angeles, but to determine their territories. Through several weeks of field work and research conducted in 1996, I identified 274 black gangs in 17 cities and four unincorporated areas in LAC.

The first major period of black gangs in Los Angeles began in the late 1940s and ended in 1965. There were black gangs in Los Angeles prior to this period, but they were small in number, and little is known about their activity. Some of the black groups that existed in Los Angeles in the late 1920s and 1930s were the Boozies, Goodlows, Blogettes, Kelleys, and the Driver Brothers. Most of these groups were family oriented, and they referred to themselves as clubs.[2] Max Bond (1936:270) wrote briefly about a black gang of 15-year-old kids from the Central Avenue area that mostly stole automobile accessories and bicycles. It was not until the late 1940s that the first major black clubs surfaced on the East side[3] of Los Angeles near Jefferson High School in the Central Avenue area, the original settlement area of blacks in Los Angeles. There were also significant black populations south of 92nd Street in Watts and in the Jefferson Park/West Adams area on the West side. By 1960, several black clubs were operating on the West side[4] of Los Angeles, an area that had restricted black residents during the 1940s.

Several of the first black clubs to emerge in the late 1940s and early 1950s formed initially as a defensive reaction to the white violence that had been plaguing the black community for several years. In the communities surrounding the original black ghetto of Central Avenue and Watts, and in the cities of Huntington Park and South Gate, white Angelenos grew dissatisfied with the black population that had been migrating from the South during WWII. During the 1940s, resentment in the white community grew as several blacks challenged the housing discrimination laws that prevented them from purchasing property outside the original settlement neighborhoods and from integrating into the public schools. Areas outside the original black settlement of Los Angeles were covered by legally enforced, racially restrictive covenants or deed restrictions. This practice, adopted by white homeowners, was established in 1922 and was designed to maintain the social and racial homogeneity of neighborhoods by denying non-whites access to property ownership. By the 1940s, such exclusionary practices had made much of Los Angeles off-limits to most minorities (Bond 1936; Davis 1990:161,273; Dymski and Veitch 1996:40). This process increased the homogeneity of communities in Los Angeles and further exacerbated racial conflict between whites and blacks, as the latter lived in mostly segregated communities. From 1940 to 1944, the black population of Los Angeles more than doubled, and ethnic and racial paranoia began to develop among white residents. Chronic overcrowding was taking a toll, and housing congestion became a serious problem as blacks were forced to live in substandard housing (Collins 1980:26). From 1945 to 1948, black residents repeatedly challenged restrictive covenants in several court cases in an effort to move out of the dense, overcrowded black community.
These attempts resulted in violent clashes between whites and blacks (Collins 1980:30). The Ku Klux Klan resurfaced during the 1940s, roughly 20 years after its presence had faded in the late 1920s (Adler 1977; Collins 1980), and white youths formed street clubs to resist the integration of black residents into the community and its schools.
In 1943, conflicts between blacks and whites occurred at 5th and San Pedro Streets, resulting in a riot on Central Avenue (Bunch 1990:118). White clubs in Inglewood, Gardena, and on the West side engaged in similar acts, but the Spook Hunters were the most violent of all white clubs in Los Angeles.

Between 1973 and 1975, several of the non-Crip gangs decided to form a united federation, as many Crip gangs began indulging in intra-racial fighting with other black non-Crip gangs. Because of the sheer numbers the Crips were able to accumulate through heavy recruitment, they were easily able to intimidate and terrorize non-Crip gangs, which led to one of the first Crip-against-Blood gang-related homicides. A member of the LA Brims, a West side independent gang, was shot and killed by a Crip member after a confrontation (Jah & Keyah 1995:123). This incident started the rivalry between the Crips and the Brims. The Piru Street Boys, a non-Crip gang in Compton, had severed their relations with the Compton Crips after a similar confrontation, and a meeting was called on Piru Street in Compton where the Blood alliance was created. Throughout the mid-1970s the rivalry between the Bloods and Crips grew, as did the number of gangs. In 1974 there were 70 gang-related homicides in Los Angeles, and by 1978 there were 60 black gangs in Los Angeles: 45 Crip gangs and 15 Blood gangs. In 1979 the founder of the Crips was murdered at the age of 26, Crip infighting was well established, and gang crime grew more perilous. The county reported 30,000 active gang members in 1980 (Table 1.1), and gang murders reached a record high of 355 (Table 1.2). The Los Angeles District Attorney's office and the Hard Core Gang Unit began to focus their resources on prosecuting gang-related offenses during this time (Collier & Horowitz 1983:94).
From 1978 to 1982, the number of black gangs grew from 60 to 155 (see Chapter 5), and by 1985 gang homicides were reaching epidemic proportions after a brief lull in activity during the 1984 Olympics. The epidemic of gang-related crime and homicides continued to soar throughout the 1980s, peaking in 1992 with 803 gang-related homicides.

In the three years after the first Crip gang was established in 1969, the number of black gangs in Los Angeles grew to 18. Table 1 reveals that in each year for which gang territory data were available, the growth in the number of gang territories was significant. In the six years between 1972 and 1978, 44 new black gangs formed, and only two gangs became inactive. In the 14 years between 1982 and 1996, 150 new gangs formed. The most dramatic growth, however, came in the four years between 1978 and 1982, when 101 new gangs formed. In addition to the increase in the number of gang territories, their spatial distribution changed during these years, penetrating several new places within Los Angeles County. The dramatic increase in the number of gangs from 1978 to 1982, most evident in Los Angeles, Compton, and Inglewood, occurred at the same time unemployment was rising because of plant closures. A major phase of deindustrialization was under way in Los Angeles that resulted in 70,000 workers being laid off in South Los Angeles between 1978 and 1982, heavily impacting the black community (Soja et al. 1983:217). Unemployment resulting from base closures and plant relocations has been linked, among other factors, to persistent juvenile delinquency that leads to gang development (Klein 1995:103,194). Spergel (1995:61) found that gangs were more prevalent in areas with limited access to social opportunities and with social disorganization, that is, a lack of integration of key social institutions, including youth and youth groups, family, school, and employment, in a local community. The type of community was also believed to influence the prevalence of gangs; neighborhoods with large concentrations of poor families, large numbers of youths, female-headed households, and lower incomes were key factors (Covey et al. 1997:71).
In addition, poverty that is associated with unemployment, racism, and segregation is believed to be a foremost cause of gang proliferation (Klein 1995: 194). These conditions are strongly associated with areas plagued by poverty, rather than the suburban regions identified in this study.

By the mid-1990s there were an estimated 650,000 gang members in the United States (U.S. Department of Justice 1997), including 150,000 in Los Angeles County (Figure 1.1). In addition, in 1996 there were over 600 Hispanic gangs in Los Angeles County, along with a growing Asian gang force of about 20,000. With gang membership increasing, gang-related homicides in Los Angeles County reached epidemic proportions for black and Hispanic males, who represented 93 percent of all gang-related homicide victims from 1979 to 1994 (Hutson et al. 1995). From 1985 to 1992, gang-related homicides increased in each of eight consecutive years (Figure 1.2). In the year following the Los Angeles civil unrest of 1992, however, there was a ten percent drop in homicides, the first reduction in gang-related homicides in Los Angeles since 1984. This drop in killings was the result of a gang truce implemented by the four largest gangs in Watts: the Bounty Hunters, the Grape Streets, Hacienda Village, and PJ Watts (Perry 1995:24). Shortly before the unrest of April 29, 1992, a cease-fire was already in effect in Watts, and after the unrest a peace treaty was developed among the largest black gangs there. Early on, the police credited the truce for the sharp drop in gang-related homicides (Berger 1992).

Notes:

[1] A majority of the Crips and Bloods in Los Angeles are African American, with the exceptions of a Samoan Crip gang active in Long Beach, a Samoan Blood gang active in Carson, an Inglewood Crip gang with mostly members of Tongan descent, and a mixed Samoan/black gang active in Compton. With the exception of these four gangs, Crip and Blood gangs are predominantly African American.

[2] The groups during this time identified themselves as clubs, but the police department often characterized these groups as gangs.

[3] The East side of Los Angeles refers to the areas east of Main Street to Alameda in the City of Los Angeles. This area includes Watts, and the unincorporated area of Florence. It does not include East LA, Boyle Heights, or other eastern portions of the city. Those areas are usually referred to by their specific names.

[4] The West side of Los Angeles refers to the areas west of Main Street, an area that was off limits to blacks in the 1940s. Over time, though, the border between east and west has moved slightly west in the "mental maps" of those who lived in this area. Broadway later became the infamous border, and later still the Harbor (110) Freeway. Some today consider Vermont Avenue the division between the West side and East side. Gangs have always identified geographically with either the East side or the West side, and they have maintained Main Street as their point of division between the two.

[5] Main Street was the street that bounded the Central Avenue community to the west, but over time this boundary moved further west. Movement out of the ghetto proceeded in a westerly direction, and over time Broadway became the boundary, then later Vermont.

[6] Personal interview with Raymond Wright.

[7] Organization was a Los Angeles-based black political cultural group from the 1960s that was under the leadership of Ron Karenga (also known as Maulana Karenga).

[8] Interview with Danifu in 1996.
