Not Happily Ever After: Why the Disney Renaissance Ended…


With the recent slate of Disney films being released to theaters, you could be forgiven for thinking we’re back in the 1990s. In the past two years, Disney has released live action remakes of Beauty and the Beast, Aladdin, and The Lion King. Next year Mulan will follow suit, and The Hunchback of Notre Dame is now officially in the works too. All of these films are based on blockbuster animated features from Disney that premiered in the 1990s.

They were part of the illustrious era of animated feature films known as the Disney Renaissance, which ran from 1989 to 1999. And frankly, it’s easy to see why these films still seem relevant, both chronologically and culturally: the generation that grew up with them consists mostly of Millennials born between the early 1980s and the mid-1990s, meaning the oldest have yet to reach forty and the youngest have barely finished college. They’re also a notoriously nostalgic group, like most Americans, but with an even more ubiquitous platform for perpetuating that nostalgia: social media and the Web.

They grew up with the Disney Renaissance and have ostensibly kept the relevance of its animated features alive—literally two decades after its official end. With at least half of the films from the era slated for reinterpretation as major live action films, their enduring grasp on popular culture is unquestionable.

As a member of this generation and a bona fide Disney buff, I can attest that the memory of these films is indeed as vivid and fresh today as it was when I was at the target age for their appeal. They are touchstones of my childhood, imbuing me with a sense of wonder and a worldview that I still hold onto to this day.

Nonetheless, the Disney Renaissance did have a clear beginning and end. No new films were added to its original run after 1999. As any Disney fan or even a casual observer will recall, subsequent animated features from Disney experienced a steep drop in popularity for nearly a decade afterwards, despite a steady output of releases during that time.

As a fan with unrelenting exposure to these animated films, I have arrived at a conclusion as to why the phenomenon of the Disney Renaissance drew to a close at the end of the last century.

My theory is rooted in the catalyst that started the Disney Renaissance and made it popular in the first place. Kick-starting the era in 1989, The Little Mermaid was the first fairy tale the Disney studio had worked on in thirty years. This was largely why it was a resounding success: it returned to the formula that had worked so well for Disney in the past, a fairy tale musical centered on a princess. Disney had abandoned this formula for nearly two decades prior, and had suffered for it with audiences, both commercially and artistically.

In hindsight, however, I believe the rediscovery of this Disney formula during the Renaissance era would also become its undoing. If the era had one fault, it was that virtually every film adhered stringently to the formula, with the telling exception of The Rescuers Down Under, an early entry that proved to be a commercial misfire. Every other animated feature between 1989 and 1999 consisted of: a beautiful or outcast protagonist, usually missing one or both parents; amusing sidekicks; a romantic interest; a colorful villain; an epic setting; and a roster of songs covering the requisite themes of life goals, romantic love, and villainy. This is not to dismiss the considerable diversity of creative and technical achievement of the era, which produced some of the most ingenious, memorable, and astonishing feats of song, visuals, and character, not to mention an unprecedented level of representation from Disney (lead characters of different ethnicities: Aladdin, Pocahontas, Mulan).

Nonetheless, it’s quite revealing that no previous era of Disney animated features could be accused of the same homogeneity: the Golden Age, from 1937 to 1942, had only two films that featured romantic love, only one princess, and two clear musicals. The Silver Age likewise had several films without romantic love or a musical format. These two eras are arguably the biggest rivals of the Renaissance in popularity and artistic achievement, and both came to an end for their own disparate reasons: the economic downturn after 1942 due to World War II, and the death of Walt Disney in 1966, respectively.

The redundancy may have begun to take its toll as early as the midway point of the era’s run: after four stellar blockbusters from the studio, things suddenly slowed down with the release of Pocahontas in 1995. Widely viewed as the first critical letdown of the era, it wasn’t immediately followed by a return to form either; The Hunchback of Notre Dame in 1996 and Hercules in 1997 both failed to draw audiences back after that misstep. Something was amiss.

Audiences were being drawn elsewhere too: computer animation. This is perhaps the most commonly held theory of why the Disney Renaissance came to an end: towards the end of the 1990s, a new medium was dawning, one that would supplant the traditional, hand-drawn (2-D) animation Disney was known for. With the release of Toy Story in 1995, a resounding success not just for being the first major computer-animated feature but for being a true triumph of story, audiences found a new outlet for family-friendly films that appealed to all ages. A slew of computer-animated (or CGI) films followed in gradual succession for the rest of the decade and beyond, none of which followed the renowned Disney formula, and often to critical and commercial success that surpassed even Disney’s. If the Disney Renaissance proved that animated features could appeal to all ages, CGI animated films proved that they didn’t have to be based on classic, existing literature, opening the doors for innovations in story that happened to be married to a very innovative technology now coming into its own at the end of the twentieth century.

Although I agree that CGI certainly disrupted Disney’s momentum in the latter half of the 1990s, particularly since CGI animated features have remained more popular with audiences and critics alike and 2-D animation has never come back into vogue since, I still stand by my theory that it was more a matter of content than of medium. The onslaught of CGI feature-length films actually arrived rather slowly, and did not immediately crowd the market that 2-D Disney animated features dominated: after Toy Story was first released in 1995, the next CGI films were Antz and A Bug’s Life, both premiering in 1998. That left three full years in between, during which Disney released three animated features that could have vigorously filled the void and maintained its stronghold on audiences. Yet they didn’t. The Hunchback of Notre Dame, Hercules, and Mulan, though not critical or commercial failures, were far less renowned than their predecessors from the Disney Renaissance. Again, a malaise seemed to have settled over audiences, which could be read as a reflection of the medium’s output. Audiences surely weren’t just holding off for the next great CGI film after witnessing the medium’s sole initial offering in 1995. The average moviegoer had no idea how the CGI medium would eventually fare, though it was clearly a technological advancement. (It wasn’t until 2001 that the medium exploded, with simultaneous releases of multiple CGI animated films cementing it as a mainstay in cinema.)

It was apparent that audiences had simply grown tired of the Disney formula, and so the business changed after 1999, just as it did after the Silver Age in 1967—when the studio entered a prior era of commercial and critical malaise, following the death of Walt Disney.

With that, it’s also helpful to understand what followed the Disney Renaissance: from 2000 to 2008, the Post Renaissance era echoed the era that followed the Silver Age, the Bronze Age of 1970-1988, when the studio also struggled to redefine its identity to audiences. The films of the new millennium would reflect these efforts, striking into new territory such as science fiction, original stories not based on classic tales, and even the CGI medium itself, a portent of the studio’s eventual direction. Most of the films from this era didn’t quite resonate enough with audiences to become classics.

The Revival era followed in 2009, with yet another rediscovery of the formula: Tangled cemented Disney’s return to the zeitgeist, followed by Frozen. Both were clearly fairy-tale musicals centered on a princess, but married now to the CGI medium, to which Disney has converted indefinitely to fit the times. Aside from the new look, these films are quite similar to the Renaissance formula. Audiences responded and propelled them into the public consciousness as they did in the Renaissance era, hence the new era’s name.

But if these new films from the Revival are following virtually the same formula as the Renaissance, why did the Renaissance cease in the first place? Shouldn’t it have endured, unabated, by sheer demand?

Again: we just needed a break. As a lifelong Disney fan, with the benefit of hindsight, I couldn’t fathom a Disney Renaissance-style film being released by the studio every year for the next two decades after Tarzan, the last of that era, in 1999. On some level, I would enjoy it purely as a diehard fan, but it would almost become campy—a parody of itself if you will. As much as audiences loved the Disney Renaissance, we can also sense artistic malaise. The formula had gotten monotonous and stale—again, already by the midway point of its era—and audiences clearly reacted with their wallets.

Does that mean the Revival era is doomed to repeat history? Surprisingly, it may be averting this fate: although it has certainly resuscitated the Disney formula, there’s one telling factor that separates it from the Disney Renaissance. It’s not following the formula for every new film. To its credit, and maybe by calculation, the studio isn’t doing princess stories or fairy tales exclusively. Maybe that’s part of its success: Big Hero 6 and Zootopia are about as divergent from fairy tales and princesses as you can get, and both met with clear commercial and critical acclaim, unlike the misfires of previous eras that also strayed from the formula.

Whether we realize it or not, perhaps this is what audiences need. We will always love and adore princesses and fairy tales, but there needs to be variety. There’s nothing wrong with having a studio trademark (family-friendly films, music, artistic visuals), but a trademark can be broad and can expand. Art is about change, pushing boundaries, and expanding possibilities. Sure, we do like some degree of familiarity—all art has a core familiarity: a movie has a beginning, middle and end; music has notes and instruments, verses and choruses. But along with familiarity we need variety.

Perhaps Disney has a new, unprecedented confidence and competency that is allowing them to achieve something they weren’t quite able to do in the past: successfully tell classic stories and original new stories, concurrently. Disney may have failed at pursuing either one at various times in the past, not because either one was theoretically bad—but because they just weren’t truly creating something exceptional. As mentioned, they were going through the motions after a certain point during the Disney Renaissance, settling into a creative ennui—or alternately, striking into new territory with dubious artistic vision, during the Post Renaissance for example. But if a story is truly told well, it can potentially succeed. Audiences will respond to something special even if it defies current trends, as they did when The Little Mermaid reignited this medium that had virtually gone defunct for nearly two decades, kick-starting the Disney Renaissance in 1989.

Will Disney get it right this time? We can only wait and see.

Any long-running institution is going to experience inevitable peaks and valleys in relevance and vitality—hence the different eras of Disney feature animation that exist in the first place. I am resigned to the eventual fate of the storied Disney Renaissance of my youth, because to borrow a platitude: good things can’t last forever. Sitcoms come to an end. Book and film franchises end—and can even be revived again after a certain period of absence (sound familiar?). The much-beloved Disney Renaissance is all the more revered because it wasn’t permanent and was limited in duration. Its impermanence lends it a rarity that further inspires gratitude and veneration. It was beautifully fleeting, as all life is.

It’s almost as if the beginning and ending of each era was inevitable—because like all art, Disney feature animation is an evolving medium. The studio is learning its craft in real time, and we get to watch it unfold onscreen.

 

‘Mind’ Games Keep You Guessing


The Year I Lost My Mind certainly avoids current gay indie-film tropes, if not most cinematic tropes altogether, with its bizarre collection of idiosyncrasies.

On the surface, it’s a thriller about a troubled young man who dabbles in petty illegal activities, but it’s his particular tics and habits that amount to a tantalizing viewing experience, if for no other reason than to find out just what the hell is going on.

Tom is a pale, offbeat, lonely gay man in his early 20s, living at home with his mother and sister in Berlin. Our first introduction to him sets the tone: he dons a large horse mask, compelling his resigned mother to ask “Why do you enjoy having people be afraid of you?”

Her inquiry is apt. Things only get stranger from there, as her moon-faced, taciturn son wanders into one odd scenario after another, often wearing other masks from his bountiful collection.

Tom soon breaks into a stranger’s apartment where the handsome tenant sleeps peacefully, unaware of the passive crime that hovers over him. Tom simply observes the unsuspecting young man, then leaves—making mental notes for some later transgression perhaps.

This leads to a low-grade stalking scenario, spread out over the course of the strange protagonist’s idle days, spying on his subject’s routine around town from a distance.

Through his increasingly disturbing habits, interests, and behavior, one gets the sense that Tom has not only been marginalized by mainstream society but by the gay subculture too, with his unconventional looks that preclude reciprocation when he’s witnessed making advances on other men.

Is this why he is acting out, morally and legally? And to what extent will it manifest?

A subplot unfolds, in which Tom encounters a fellow masked man, larger, stranger, and more foreboding than him, at one of his haunts around town: the nearby woods where men cruise each other.

This stirs another question: is his doppelganger’s existence real or merely a figment of Tom’s demented imagination?

Tom revisits his previous subject’s apartment regularly, affirming his lascivious motives in the absent man’s empty bed. He skirts the calamity of being caught more than once, escaping through the glass doors of the patio. His subject begins to notice missing cookies, misplaced books—but he also has a cat, so the picture is hazy.

The inevitable occurs when Tom boldly admires the handsome man sleeping in the middle of the night, but he manages to retreat through the front door, buffered by the shock he’s cast over his newly lucid victim.

It’s through another chance encounter that the victim puts two and two together, and he takes the situation into his own hands—with unexpected results that the filmmaker clearly intends to shock. Although compelling, it doesn’t feel quite believable enough to be effective.

With a fairly adroit buildup to this climax, it feels like a bit of a cheat to then tumble into improbability. The subplot involving Tom’s frightening double is resolved in a more subdued manner, alleviating some of the discord. Nonetheless, the film is effective enough for everything that occurs before its finale—an interesting study in anomaly: its images, moods, and actions are sure to linger long after the screen fades to black.

‘Mid90s’, middle ground: lacking inspiration.


Mid90s proves that standard movie tropes remain familiar no matter how you dress them up. And first-time director Jonah Hill has certainly earned kudos for dressing his new film up to fit its epochal title: one only has to glimpse a few grainy frames (purposely shot on 16mm film for added effect) to be transported back to the last days before the millennium: compact discs, baggy clothes, big hair, and of course a nostalgic soundtrack by a seminal voice of the era, Trent Reznor.

Although the title references an entire cultural zeitgeist, the film is far from being all-encompassing in scope or subject. Instead, it’s an insular story built on specificity, resting under a rather prosaic and vague title for lack of keener inspiration, which is its biggest flaw.

The story begins in Los Angeles during its titular time period, with a young preadolescent boy named Stevie. Hounded by his boorish older brother from the opposite end of the adolescent spectrum and given free rein by a laissez-faire mother suffering from arrested development, Stevie is primed for one of cinema’s biggest clichés: a summer he’ll never forget.

This leads into another hallmark of the period, the skateboarding underworld, as Stevie sets his sights on befriending a group of older boys at the local board shop.

As soon as he unremarkably worms his way into the affections of the boisterous but nonthreatening slackers, his story ticks off the requisite milestones of coming-of-age and its subgenre of films: exhilarating new experiences, wise mentors, chafing against his family, high jinks that just skirt the line of true danger and serious trouble.

Since the plot is a standard framework, the question is whether the parts make up for the sum. Stevie is competent enough as a protagonist: he fits the bill in looks and temperament, without hitting any false notes. The home life he shares with his threadbare family never truly generates a sense of urgency, which curbs any added weight to his arc. Stevie’s older brother and young mother aren’t guilty of anything beyond typical dysfunctional fare: physical taunts from the former and distractions by the latter. As for Stevie’s newfound entourage: they border on caricatures, with raunchy nicknames and slight characterizations that are as nuanced as a junior high yearbook.

The film suddenly hits a climax that can only be described as inorganic and contrived—but this is in keeping with its steadily innocuous tone. Mid90s doesn’t seek to innovate or make a statement. It’s a light tale that never truly triumphs or fails abysmally, inhabiting a safe middle ground of familiarity, made all the more evident by its use of epidemic-level nostalgia for a past era that’s bound to pique audience interest. That nostalgia is the only true star of the movie; without it, the film would lose half of its distinction.

Nobody Walks in L.A.


L.A. has the worst pedestrians in the world—because we’re not used to them. It’s bad enough that it takes forever to drive a relatively short distance in this town due to traffic, but when you need to drive through an intersection and a person dares to walk across it first? It’s enough to make you curse the existence of humanity.

Sometimes it’s truly a test: on more than one occasion, I’ve been delayed by the truly physically impaired. Of course I empathize and wait patiently on those occasions, but those moments feel tailored to test the utmost limits of my character. It’s like halting an epic sneeze or cutting off a bowel movement midstream: the absolute urge to purge and the terror of following through with such a deplorable act call for your every last nerve to reverse the impossible.

On one such occasion, I had to make a left turn from a moderately busy lane; a slew of cars rolled through in the opposite direction, deterring me. My receptors were already piqued because this traffic was a tad unusual for this area given it was an early Saturday evening. I scanned my target intersection, and saw two young men idling by on skateboards. They cleared before the train of cars did. Impatient, I began to eyeball the nearest traffic light up ahead that could clip this parade to my left. Then I saw it:

A disheveled, middle-aged man ambled arduously forward towards my designated cross street—on crutches. What’s more—in my periphery, I caught an aberration on one of his legs—yes, his right leg was amputated around the knee. Immediately, my mind jumped to do the math: at his laborious pace and with the yellow light imminent up ahead, he would reach the intersection just as the cars on my left cleared.

I wasn’t in a rush. I wasn’t even angry at him. I was just resolutely amused that this was happening. It felt so indicative of this city. Here I was, driving a car that still functioned well past its purported life expectancy, with takeout on my passenger seat—no plans for the night, half a mile from home—and normally I would’ve flipped out at this pedestrian who dared to cross a public street in direct tandem with my turning into it, except that in this scenario the perpetrator was possibly a transient with clear physical limitations and little to no means, by the looks of his tattered appearance.

If I had flipped the switch into full selfish pig mode at that very moment, even just privately in the confines of my car—I knew it still would’ve been a sin, in the eyes of my conscience and whatever god may exist. I could also picture an audience of my fellow human beings at that very moment, sneering and groaning at me if I were to recount the story on stage or if they were privy to it via a hidden surveillance camera—satisfied in their smugness that I was more terrible than they were, convinced that they would’ve felt nothing but angelic compassion in my position.

I drove home and lamented it all: the feckless logistics of this town, the cruel irony of fate, the snide hypocrisy of humans and my own presumptions about them—and my inability to resist being affected by all of this.

Interpersonal Skills: I can’t deal with people.


I’ve come to the conclusion: I can’t deal with people.

Although by my mid-thirties I know life is a constant learning experience and that we can traverse the entire continuum of allegiances and viewpoints, I have gleaned from my social experiences thus far that I’m just not good at interpersonal skills.

First off, I’m poor at asserting myself. When faced with a scenario where a simple expression of my needs would suffice, I am often drowned out by the myriad connotations of the situation: who is involved, how much I love/fear/loathe/need them, what words or actions spurred the need to assert myself—and how it affects me emotionally.

Beyond that, I seem to lack the same interests, motives or needs that many people exhibit in socializing: I don’t crave status, dominance, or social gain through who I associate with.

As any experienced person knows, there are tacit “games” that people play with one another—through physical action, comments, rejection—to assert their needs and agenda in regards to others.

I’m not interested.

I’m not interested.

I can’t deal with people judging others based on what they look like, who they hang out with, what job they have.

I can’t deal with people who aggressively label me—thinking they “know me” but they really don’t, and when I inevitably prove them wrong they get mad at me, of course, because they’re upset that the world doesn’t fit their perception of it.

I can’t deal with people who put others down in order to build themselves up. I can’t deal with people who gleefully abuse others for this purpose—who have no qualms making an innocent human being miserable.

I can’t deal with people using others for personal gain, including those they had considered their friends and closest colleagues.

I don’t want to trade barbs with people, because on an instinctual level I don’t want to sink to that level. It disgusts and unnerves me to see myself behave that way. For many people, if I can’t do that—then I am simply a target for their deplorable behavior, and therefore I must avoid them for my own safety and self-respect.

Consequently, even if I possessed the fortitude to assert myself more effectively, my general distaste for our social mores and behaviors could well thwart me from ever engaging. I don’t want to have to correct people’s behavior towards me—not just because I’m incompetent at it, but because it offends and repulses me that I would have to display certain traits to attain better treatment.

It sounds like a cop-out, and in a way, it is. After all, life is full of things we don’t want to do but that are essential as a means to a healthy life that truly benefits us. Each day we wake, wash, and dress ourselves—that in and of itself is a requisite for a healthy existence. The vast majority of us must work at an occupation to earn resources that in turn secure us more resources.

Interpersonal skills are not as tangible as our bodies, food, water, and a roof over our heads—but they are just as vital for the social animal that we are.

This is where I clash. My principles seem to be at odds with the rudimentary mechanics of socializing.

It’s a shame, because what I lack in grit I make up for in other virtues: as a friend, I’ve been told that I’m fun, open-minded, tolerant, and unconventional. I challenge the norms of society for the greater good of seeing the world anew. I am loyal, kind, generous, and gracious. I am accepting and thoughtful most of the time. I am engaging, but also capable of great independence. I have clearly defined interests and opinions that define me and can serve others.

Look, I’m not perfect either, and I can even be guilty of unsavory behavior towards others, but for the most part I believe in a higher state of coexistence. And this is another hindrance to my interactions with others.

At the risk of sounding hopelessly naïve or oblivious, I believe in a world where we tolerate our differences instead of persecuting each other for them. I believe in treating each other with decency and minimal respect, even if we differ in lifestyle, views or appearances. I believe in equality—that we are all inherently valuable, and therefore the need for stringent hierarchy or status is irrelevant. I believe that as long as a person is not harming anyone, they should be accepted as they are—not persecuted because of someone else’s expectations or ideology. I know this isn’t plausible in our world, but that is my core approach to life, and it informs how I view and interact with others.

This is the reason why I feel separate from most people, and different.

I’ve realized this is the reason why I am often confounded when people invariably end up being… human.

It’s all too common for people, including those to whom we’d entrusted ourselves, to lash out at one another—because of differing temperaments, beliefs, expectations, ideology, and needs.

At this age, I’ve experienced the disappointment of so-called “friends” who display less than stellar traits towards me, and who treat me in a way that directly opposes basic decency and humanity.

I’ve only been able to count on a small handful of friends who haven’t turned on me yet—and of that minority, many are simply not present enough in my daily life to risk offending me.

This, I feel, must be the resolution to my anomalous condition: to seek out and zero in on the rare people who will not see me as a target for their foibles and dire needs.

When I find such a commodity, I must treasure them and keep them in my life—they will be my principal social outlet, since it appears I am not capable of much more than that.

Will I ever find such rare exceptions? That’s the question.

The View vs. The Talk


I love watching women gab. As sexist as it sounds, I’ll just say it: they’re good at it. I imagine it’s the equivalent of people tuning in to watch physically fit men play sports. Also, if I really want to fully commit to being politically incorrect: maybe it’s part of my DNA as a gay man to enjoy hearing women yak about everything from the profound to the frivolous. I can relate, and it’s fun.

Since the beginning of this decade, we’ve had two major choices to see the biggest and brightest women in pop culture do just this, on daytime T.V. in the U.S.

Venerated journalist Barbara Walters set the precedent in 1997 with a show called “The View”, featuring ‘different women with different points of view’ sitting around a table and discussing the day’s biggest headlines. It ruled the roost as the lone standard for the concept until 2010, when former child actress Sara Gilbert had the sterling idea to do an offshoot of the format (with the angle that it would consist of a panel of “moms”—although its predecessor never played down the maternal status that most of its panelists could claim too). As a viewer, though, I wasn’t discerning; it made sense because, in a nation as large and diverse as ours, one of the benefits is how we can expand on commodities… like talk shows. After all, there have been multiple late night talk shows for decades now, competing directly with one another and thriving in their own right regardless of the saturated market. So when a new daytime talk show featuring a panel of half a dozen women talking about topics in the news with their “different points of view” popped up, we took it in stride.

Both “The View” and “The Talk” have succeeded with viewers and been nominated for the same daytime Emmy awards throughout the years, solidifying their place in the pop culture lexicon.

But is there a difference between the two, or a clearly superior one?

“The View” has the advantage of experience on its side: thirteen more years than its rival. With that plethora of time, it’s seen and done many things it can learn from. Infamously, placing two panelists whose perspectives are diametrically at odds is ratings gold: when outspoken liberal Rosie O’Donnell was recruited as the show’s moderator in 2006, during the contentious Bush/Iraq War years, the writing was on the wall—she would ultimately come to blows with then-outspoken conservative panelist Elisabeth Hasselbeck the following year. It was the best daytime drama that needed no script.

The show also has the undeniable class factor that only a highly respected figure in the journalism field like Barbara Walters can provide. Although “The View”’s reputation has ebbed and flowed as any long-running entity is prone to, its pedigree is still rooted in solid stock.

It’s not without its trials. The show has “jumped the shark” as much as a talk show can, in the sense of creative and production malaise. Since the 2010s, there has been highly visible turnover among the show’s panelists—it’s hard to even keep up with who’s officially on the roster these days, like watching your favorite sitcom characters get replaced by new actors or new characters you just don’t care for. Many of the new recruits were blatantly regrettable as well (Candace Cameron Bure and Raven-Symoné dragged down the credibility of the show, imho! Thankfully, their tenures were scant). The show has even rehired previously retired or departed co-hosts such as longtime favorite Joy Behar, Sherri Shepherd, and even Rosie O’Donnell herself (who ultimately stayed for only one season again in 2014, mirroring her infamously clipped first round).

“The Talk” also tinkered with its lineup initially after its debut season, which is to be expected of a fledgling show though. It found its footing with a consistent lineup afterwards, and has only had one panelist replacement since.

Another difference with “The Talk” is that it places less emphasis on formality. The show humors its audience and viewers by directly asking them questions after bringing up a headline—whether a serious news story or celebrity gossip, moderator Julie Chen will offer a concluding prompt to encourage monosyllabic responses, boos, hisses, or laughter from the live audience, reminiscent of, well, a daytime talk show (a 1990s version, moreso).

Since the show is filmed in Los Angeles, another distinction from its New York City predecessor, it also has a daily celebrity-themed guest correspondent who contributes a pop culture headline (adding to the inevitable pop culture news that permeates the show anyway), in a segment loosely dubbed “Today’s Top Talker”.

As one can guess, “The View” by reputation skews more towards a serious, politically themed show. Although its current longtime moderator Whoopi Goldberg is a veteran Hollywood actress, she is outspokenly political and even good-naturedly mocks the more frivolous pop culture news she’s required to broach regularly (read: reality show fodder).

Other panelists, regardless of how short their tenures have been in recent years, have frequently been renowned political pundits as well, something “The Talk” has steered clear of completely. Currently, Senator John McCain’s daughter Meghan McCain is the resident conservative Republican on “The View”.

“The View” has also expanded its best-known segment, the roundtable discussion dubbed “Hot Topics”, from just a third of the show’s running time to half or more, betting on attention-grabbing headlines and the often heated exchanges between the ladies on the panel to sustain viewers.

Both shows have the requisite celebrity guest interview in the latter half of the show. Again, “The View”, naturally more political, regularly invites political figures such as former president Barack Obama and several political commentators. “The Talk” relies entirely on celebrity guests, occasionally some that are not even major draws. This is moot, since I only tune in to each show to watch the ladies yak amongst themselves in their roundtable segments.

Judging each show based on my proclivities, I do have a clear conclusion as to which one succeeds most. “The View” tides me over, for the reasons above: it has more legitimacy but is still able to delve into melodrama, camp, and frivolity. Although its high turnover rate is unnerving and dispiriting, it has enough mainstay power players to anchor it. As a child of the 1980s and 1990s, I have a bias for Whoopi Goldberg as a pop culture fixture. Comedian Joy Behar’s sassy Italian schtick hasn’t gotten old—or perhaps, twenty-one years into the show’s run, I’ve simply grown attached to her presence. As for the rest of the current panelists, I feel neither strongly for nor against them. Sara is the bright blonde who keeps things light or at least centered; Sunny adds more diversity and a touch of primness. Meghan obviously serves as an antidote to the clear liberal slant of the show’s two veterans, and for the most part I enjoy her dynamic. Not to paint her as an archetype, but I love a good “nemesis”, and Meghan is one by default, constantly having to defend her political party whenever President Trump drags it through the mud, which is often.

“The Talk” is sufficient, but my taste doesn’t quite extend to audience participation and an overabundance of pop culture fluff. And although it currently has the steadier panel lineup, I’m not especially fond of any of the panelists: moderator Julie Chen is too proper; Sara Gilbert is insightful but staid as well; Sharon is the venerable one who’s been around the block, but is a bit too mannered and biased in her outspokenness; newcomer Eve hasn’t proven her worth yet beyond tugging down the median age of the group; and Sheryl Underwood plays up the sassy black woman trope a bit too much.

Each show brings something to the table, and it’s merely a matter of taste. For my part, I mostly blur the edges that separate them. They’re like two sitcoms with an overlap of similarities and differences, and I like them both for different and similar reasons.

City of Broken Dreams


I volunteer at the local gay center occasionally. It’s located in the heart of Hollywood—on Santa Monica Boulevard, just off of Highland. If you go a bit further north on Highland, you’ll hit Hollywood Boulevard next to the Kodak Theater where they used to hold the Academy Awards.

I don’t live too far away, geographically, but as with everything in L.A. it’s cultural disparity that separates us, not distance. Driving up from my nondescript, low-key neighborhood of West L.A., adjacent to Beverlywood, I’m essentially wading into the gritty, smoggy, unfamiliar waters of Hollywood when I venture there. More discerning people would have ardent reservations about even going there, barring an absolute emergency or valid necessity. Geographic prejudice is just one of the many charming traits of Angelenos you’ll discover here. I’m certain many of them take gleeful pride in it, much as they would a fine head of hair or an official job title.

One Monday morning, I gamely made the commute to do some filing for an upcoming event at the Gay Center. It was pleasant—getting out of my routine to help out with a good cause, while brushing shoulders with people I otherwise would never encounter on my own. The free pizza and cookies were just a bonus.

Halfway into my shift, I had to move my car to avoid parking regulations. Walking through the adjacent residential neighborhood, I got into my vehicle, circled around onto Highland Avenue, parked, then trekked back to the Center. This unremarkable act spoke volumes about the intensity of this city and its continuing unfamiliarity to me.

In close proximity to the Gay Center, several of its constituents were milling about in numbers: an African American transgender woman strutted down Highland Avenue, bemoaning the heat under her breath. A pair of young gay men, stylishly dressed, sauntered northward up the street. A lone gay man in his late thirties to early forties glanced at me curiously as I reached the crosswalk.

The street glowed under the unseasonable heat for late October—all concrete, metal, and glass—cars and casually dressed denizens moving forward with purpose. Businesses held shelter like virtue.

Back at the Center, a middle-aged man and woman danced and frolicked to music from a boombox while a small, hairy dog looked on at their side. Their diligence suggested they were rehearsing for some imminent performance. They paid me no mind, and I paid them none.

It was at that moment that I tied everything together. I realized that I no longer possessed a sense of wonder that is synonymous with youth. Not too long ago, I would have been tickled with simple amusement at the sight of this quirky couple and their canine cohort. I would have mused over their arbitrary efforts and location—the myriad possibilities of their intentions and origins.

I would have felt joy at watching the nearby city streets emitting their own special music, new to my ears as a visitor. The pedestrians and storefronts would have told stories that I knew would continue on without my witness—the mystery of it all intriguing me.

I would’ve felt this like a child on a Saturday morning: plain reverence at the beauty of life and all it had to offer on one special day. Now? I’d woken up on a new day, and didn’t recognize what I saw in the mirror anymore. Or I did—I looked just like the hardened cynics who had scoffed at me whenever I expressed unmitigated wonder in this city.

I realized: there was no sense of wonder for me anymore, because there was nothing new for me to see in this city. I knew the end of each story now, or rather: I knew where I belonged in the context of each one. I knew what to expect. I’d been trying in vain to make a connection in this fractured city. What did that tell me?

Without ambiguity, there is no need to be curious anymore. This is why people settle down and stop exploring. It isn’t necessarily a choice. It’s an acceptance of who you are and how you are received in this world. I had just been holding out for much longer than most.