The key lesson learned over the past 10 years? Recently named to Forbes' 40 under 40 list, Julia Hartz is co-founder and new CEO of Eventbrite, a billion-dollar company that has successfully disrupted the ticketing industry.
They built a platform to provide a delightful experience for both ticket sellers and buyers, and to date Eventbrite has ticketed millions of events. Hartz and her co-founders set out to democratize ticketing when they started Eventbrite 10 years ago. Back then, the ticketing industry did not always leave consumers with a warm, fuzzy feeling.
High fees, bad experiences when trying to get a ticket, and an overall lack of innovation meant that ticketing was ripe for disruption. With the right technology and platform in place, Eventbrite has successfully created a better experience for both ticket sellers and buyers. Today, the company has gone beyond ticketing alone. Eventbrite has also been voted the best company to work for in San Francisco no fewer than seven times.
Growing a company is always a challenge because one cannot predict what will happen to the culture.
Don't talk about bias. At least, don't use the word "bias" or the word "discrimination," or any of those pesky terms that have a funny way of landing companies in court. That's according to an internal document circulated inside Facebook last fall called "How to talk about fairness."
"Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you," it reads. The note, published in full by Protocol, is part of disclosures made to the SEC and provided to Congress in redacted form by whistleblower Frances Haugen's legal counsel.
In a statement to Protocol, a spokesperson for Meta, Facebook's new name, said, "In an effort to streamline how our teams approach and discuss topics related to fairness, which is an important component of how we build our products and services, we commonly share resources and guidance, like this note from our Responsible AI team last year, across the company." The note was posted around the time a reporter for MIT Tech Review was working on an extensive reporting project about the Responsible AI team, with Facebook's cooperation.
It was also a little more than a year after Facebook settled a lawsuit with civil rights groups over how its platform enabled advertisers to discriminate in housing, job and financial services ads. The Department of Housing and Urban Development also filed a similar suit against Facebook just as the first suit was being settled.
Facebook has since faced ongoing accusations of bias and discrimination on the platform, particularly with regard to advertising. Just this week, the company said it would prohibit advertisers from targeting users based on their Facebook engagement with "sensitive" topics, including religion, health, politics and more. It was in this environment that Facebook circulated its "guidelines for communicating about fairness analyses" last November. Under those guidelines, Facebook employees were instructed to "avoid legal terminology, including specific concepts like discrimination, disparate treatment and disparate impact, and more general terms such as illegal or violates."
These terms, the note's author wrote, "are legal terms with specific meanings and are therefore out of scope for what can be learned through fairness analysis." The guidelines instructed employees instead to opt for terms like "fairness," "inclusivity" and "relevant groups" when describing their work, while acknowledging that "fairness" is a "context-specific term with no singular definition." Rather than talking simply about bias, the guidelines also encouraged Facebook staff to talk about "implementation bias," "model bias" and "label bias," all of which refer to imbalances in how Facebook's AI systems were built, rather than actual discrimination that results from those systems.
The document warns employees that they should get legal approval before making "absolute statements regarding the existence of unfairness or bias or discrimination or appropriate measurement or mitigation strategies."
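For readers unfamiliar with the jargon, a "fairness analysis" in this sense usually means comparing a model's aggregate behavior across groups of people, a narrower and more measurable exercise than the legal questions the note tells employees to avoid. The short Python sketch below is a generic, hypothetical illustration of that kind of group-level check; it is not Facebook's tooling, and the metric, data and group labels are invented for the example.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive model outcomes per group (a simple group-level metric)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Toy, made-up data: in the note's vocabulary, a large gap between these
# group-level rates might be flagged as "model bias" and reviewed further.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(positive_rate_by_group(preds, groups))  # roughly {'A': 0.67, 'B': 0.4}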
The guidelines illustrate how Facebook has sought to both study and mitigate algorithmic bias, while also avoiding incriminating itself with those findings.
They also show how deeply strategic Facebook's decision to talk about these issues was, months before the company ultimately debuted some of its fairness tools in March.
According to other documents in Haugen's disclosures, the guidelines came along at a time when Facebook was trying to reclaim the public narrative about algorithmic bias on the platform. Another internal document, dated just weeks before the guidelines were posted, describes how Google, Microsoft and Amazon have publicly discussed their own efforts to make their platforms more fair.
In comparison, that document points out, Facebook had "no online presence concerning AI Fairness" at the time.
The second of two leaders from NYU's AI Now Institute, a small but influential organization researching the social implications of artificial intelligence, just joined the Biden administration to lay the groundwork for government AI policy.
Their previous work suggests their presence might encourage the government to require new transparency from tech companies about how their algorithms work. The Federal Trade Commission earlier this month created an entirely new role for AI Now co-founder Meredith Whittaker, who will serve as senior adviser on AI for an agency where tech staff has been in flux despite a mission to get tougher on tech.
AI Now alumna Rashida Richardson — a law professor who served as director of policy research for the group and has a background studying the impact of AI systems like predictive policing tools — joined the White House Office of Science and Technology Policy in July as senior policy adviser for data and democracy.
Whittaker, who once led product and engineering teams at Google and founded the company's Open Research Group, made headlines for helping guide worker walkouts and fighting the use of Google's AI technology by the Pentagon.
But it's her work at AI Now crafting practical AI policies intended to prevent encoded bias and discrimination against people that is likely to have the most relevance in her new FTC role. Merve Hickok, senior research director of the Center for AI and Digital Policy, a group that evaluates national and international policy work on AI, said the inclusion of Whittaker and Richardson in the administration aligns with the mission of OSTP leaders to clarify the rights and freedoms people should be afforded in relation to data-driven technologies.
AI Now and Whittaker declined to comment for this story, but an AI Now spokesperson said Whittaker will remain involved with the organization. Richardson and the OSTP did not respond to requests for comment. As the administration attempts to keep regulatory pace with a rapidly advancing AI tech industry, it's a little too early to know whether Whittaker and Richardson's policy goals will sync with those of their respective agencies. However, with Whittaker filling an entirely new seat and Richardson working under a director at OSTP whose position has for the first time been elevated to the Cabinet level, their voices could carry.
For one thing, Whittaker's work could bolster the FTC's efforts to intertwine data privacy and antitrust considerations in cases against tech firms. Whittaker has argued that AI advancements have been largely facilitated by a few dominant tech giants that have the resources to suck up massive amounts of data and spin it into algorithmic systems because of ad-driven business models, a common refrain of FTC Chairwoman Lina Khan.
Whittaker told lawmakers at a U.S. House Committee on Science, Space and Technology hearing that the massive amounts of data and vast computational resources fueling the AI boom "are assets that only a handful of major tech companies have, and very few others do." Whittaker and Richardson's work at AI Now proposing policies for regulating algorithmic systems — from commercial voice and facial recognition tech to automated Medicaid benefit allocation tools — offers some big clues for what they might want to push at their respective agencies.
In general, they have suggested specific steps they'd like to see implemented by the government that could force more transparency around AI, something many lawmakers demand from big tech firms like Facebook, Google, Amazon and Twitter, as well as smaller companies. In testimonies given at separate congressional hearings addressing AI, Whittaker and Richardson called for tech firms to waive trade secrecy claims that block government entities and the public from accessing information about their systems.
They also wanted lawmakers to require that companies disclose the names and vendors of AI they use to make decisions that affect people. At the center of the AI Now proposals highlighted by both Whittaker and Richardson is the algorithmic impact assessment, a framework for evaluating the effects of algorithmic and AI systems. It's a concept that has its foundation in more widely used environmental, human rights and privacy impact assessments.
The group spotlighted algorithmic impact assessments in comments on "Competition and Consumer Protection in the 21st Century" submitted to the FTC, noting the evaluation process would "provide essential information for FTC investigations into potential deception, unfair business practices, or other violations of consumer rights."
VanDruff said we may see Whittaker's influence manifest if she weighs in on new or updated FTC rules or in a case against a company.
Even before Khan was named chair, the commission had begun taking a more aggressive stance on algorithmic technologies. In separate cases, it forced photo app Everalbum and Cambridge Analytica to destroy data garnered through allegedly deceptive means as well as the algorithms built from that data.
The agency in April reminded observers of its consumer-protection work in the AI arena, including issuing guidance for businesses using automated decision systems to determine credit scoring or home-loan decisions.
Fresh off a blockbuster IPO that saw his expense management company's value soar by a billion dollars, Expensify CEO David Barrett said small businesses are the core focus.
For CEO and founder David Barrett, the successful debut capped a hectic week that had him performing a task he hasn't had to do in a long time: wooing investors for the company he founded 13 years ago. "We haven't raised money in, like, six years or something. So we don't really talk to new investors, or at least haven't in a very long time," he said. But the novel task gave Barrett an opportunity to explain the company's main focus: the small and medium-sized business market.
Investors don't always get it, he said: "That's cute and all. But when are you going to start thinking about the real enterprise?" Expensify does offer its expense management tools to big corporations, but the Portland, Oregon-based company sees SMBs as its core market. Barrett explained why in an interview with Protocol. He also talked about Expensify's game plan, his views of the expense management software market and how the company has adapted to the pandemic. Barrett, who became famous for scathing criticisms of the Trump administration, also shared his views on President Biden's performance.
It's pretty hectic basically talking to every hedge fund and investor in the world in the past week or so. We're kind of out of practice, honestly. We don't really raise money. It's been fun to get back out there and talk with people and share the vision and hear people like, "Wow, you guys have been busy." I don't know if there's a particularly hard question to answer after you've answered the same questions 50 times in a row. Maybe one of the most persistent questions — though we didn't get this as much as I expected — more often than not, people are like, "OK, so you're this payments super app in SMB."
What makes Expensify special, fundamentally, is that we have a completely different business model. Everyone else in our industry has a top-down acquisition model. They've got a sales team calling into the CFO or whatever. And that model works fine, but it only works in a tiny corner of the marketplace and it's the same market that everyone else was going after.
Our competition is email and Excel. It's like a manila envelope stuffed full of receipts that is the actual competition. And no one is defending it. Our competitors use the same business model. You buy a list of CFOs and then you call that list from top to bottom and then you put them through a qualification [process].
They're all calling the same people off the same list with the same message, selling the same product. And, shocker, it's really hard to compete when you're exactly the same as everyone else. Our approach is starting with the employee, and then they pull us into the company. The bulk of our revenue is subscription revenue that comes from companies between, let's say, 10 and employees.

There's a view that B2C fintech has become increasingly hard, and a B2B approach is more cost-effective.
I love that everyone thinks that, because that's why they're all failing while we thrive. If you try to apply an enterprise business model in the SMB market, those are really different markets. The economics of top-down acquisition just do not scale well outside of the mid-market.

Nashville is Eventbrite's second-largest office behind the company's headquarters. An Eventbrite spokesperson declined to say how many workers were laid off in Nashville. Protocol reported that several employees who attended the meeting said Hartz was emotional as she gave the news to staff around the world via video call.
Employees were notified after the call if they had lost their jobs.