May 17, 2025


Trouble in the US Non-Academic Job Market and What It Means for Philosophers (guest post) – Daily Nous


“I’m here to sound a warning bell for those who are now in the position I was several years ago.”
In the following guest post, Noah Gordon, who earned his PhD in philosophy from the University of Southern California, shares his experiences on the non-academic job market, thoughts on our current economic and technological situation, and advice for those considering leaving academia.
There are lessons in his reflections, as well, for philosophy departments interested in supporting the pursuit of non-academic work.
Disclaimer: This piece is based primarily on my own experience on the US non-academic job market, my evaluation of news related to this market, and my evaluation of many other anecdotal reports.[1] While I have tried to assess my claims in light of rigorous, reliable studies and data, and sometimes reference such work below, my attempts to do so have revealed that such resources are lacking. My sense is that radical changes in this job market have been occurring only fairly recently (in the last few years), are currently diffuse (highly dependent on specific roles and industries), and that it will take some time before these changes are captured by the slow process of economic measurement in academic journals, government reports, and so on. I am not an economist or a labor theorist, and I strongly encourage you to do your own research and analysis before relying heavily on the conclusions and analysis here.
I finished my PhD in philosophy at USC in May 2023. I considered it a success: I contributed to the philosophical dialogue with some publications and presentations, met some good people, and got to spend years studying some fascinating esoteric topics. I did not go on the academic job market. Instead, I had a plan to spend some months preparing before starting applications to non-academic, white-collar jobs in the US. Primarily, this meant doing research on relevant industries and positions, brushing up on and obtaining online certifications for relevant technical skills, doing personal projects and freelance work that would be attractive to employers, and updating and polishing job app materials, like a personal website, LinkedIn profile, and generic resumes. I had a sense of what types of positions I wanted to search for: primarily something like ‘Technical Writer’ or perhaps that strange-sounding title ‘Ontologist’, which frustratingly goes by way too many different names on online job boards (you’ll also see titles like ‘Taxonomist’, ‘Semantic Data Modeler’, ‘Linguistic Engineer’, ‘Knowledge Graph Manager’, and many other variations). I had some loose geographic preferences within the US: West or East coast, if not remote.
Although I was clearly a bit unfocused, I felt that my chances of securing some decent job or other in the specified market were pretty good. I had some white-collar work experience outside the confines of academia, unlike many other philosophy grad students. I had an undergraduate major outside of philosophy in a field relevant to the positions I was interested in, specifically IT. And I had (and this is the tasteless part where I brag about my academic credentials) all the academic insignia which are sometimes purported to be valuable on the non-ac market, including a high GPA and a higher degree from a fancy name-brand university (though not an Ivy League one).
The results of my on-and-off applications over the past year and a half are a hodgepodge of unstable and generally poorly compensated freelance and contract positions with poor working conditions. Even initial phone screens have been few and far between across hundreds of applications, and that’s before you even get to the many layers of interviews white-collar positions in the US now require (generally, after an initial recruiter phone screen, you will have at least one interview with HR and one with the hiring manager; this is a minimum, and often there are many more rounds of interviewing).
While I would love to complain more about my particular circumstances, that’s not the point of this piece (it’s just the hook). Instead, I’m going to give my analysis of the broader conditions behind my experience and some things those within philosophy with an eye on the non-ac market (including current and prospective grad students and undergraduate philosophy majors) should be thinking about. Especially if you are a current philosophy graduate student thinking about the non-ac market, even as a plan B, I strongly encourage you to think about these issues carefully. Academic philosophers are, bless their hearts, mostly blissfully unaware of the non-ac job market, and many of them will only have experiences to share about previous students who went non-ac. However, this particular configuration of troubling conditions on the non-ac job market is quite recent (post-COVID, especially 2022 and later), so those experiences may no longer be representative. In short, you probably won’t hear it from them and you should take any older experiences with a grain of salt. I’m here to sound a warning bell for those who are now in the position I was several years ago.
The job market in tech in the US is undergoing a major contraction. From a high point in 2022, job listings on major job boards for positions like Software Developer and IT Operations are way down:

[Two charts, with source links in the original post, showing the post-2022 decline in job listings for these roles.]
Meanwhile, mass layoffs by tech companies began in earnest late in 2022 and are ongoing. Some commentators are, or at least were, convinced that this is merely a correction due to some combination of tech “overhiring” during the pandemic, copycat behavior between businesses, and some very general macroeconomic conditions around high inflation and low consumer demand. I see little evidence to think that this is the case, and stronger evidence that this is not the case in the excellent economic performance and profitability of these companies throughout the relevant period. As one representative example, Meta recently laid off 5% of its workforce despite reporting large increases of 48% in income and 59% in profit in 2024.[2] In other words, even taking the premises about the macroeconomic conditions at face value, “overhiring” does not seem to have hurt the performance of these companies in a way that explains the layoffs.
I believe that instead the tech companies are direct witnesses to, and early adopters of, three concerning labor trends that predate COVID but have been greatly exacerbated and enabled by it. The tech companies, I believe, are going to function as test cases for slower, more risk-averse, and less technologically competent industries. If tech’s early adoption continues to pay off and these trends intensify as I expect them to, you can expect the situation in tech to spread to many other sectors in the coming months and years.
US companies have experimented with offshoring white-collar work to cheaper labor markets for decades. My understanding of the history of offshoring for white-collar work is that its success has been mixed, and that it largely has not had noticeable negative effects on the US white-collar market.[3]
However, things seem to be different post-COVID.[4] The pandemic effectively functioned as a nation-wide accelerator and test case for making many white-collar positions fully remote. The infrastructure for doing all sorts of positions remotely was developed rapidly, and employers and employees alike became familiar and competent with relevant processes like replacing in-person meetings with video calls.
Many employers found that a good number of white-collar positions could be done at a level that was at least close to the performance they saw for in-person work.[5] The same tech companies that are laying off thousands of workers are increasing their hiring in cheaper labor markets like India and Mexico. The inevitable reasoning becomes “If this work can be done fully remotely from the US, why can’t it be done from Krakow or Lima? We could cut our salary costs immensely and still offer top-market rates in those areas.”
These offshoring efforts are now more likely to succeed because of the increasingly mature infrastructure for remote work, and also the increasingly skilled labor pool in cheaper markets. Of course, over the long run economic logic dictates that things should even out in a globalized labor market, but there is a lot of pain coming for those in current high-cost labor markets in the meantime.
The tech businesses that are downsizing are the very same ones currently building frontier AI systems explicitly designed to automate a massive amount of white-collar work. This could just be a coincidence, but I suspect that the front-row seat that decision makers within these businesses have to the progress of AI development has given them a sense that they will soon, if not already, be able to do more with fewer human workers.
The effects of AI automation on labor are a subject of massive debate amongst economists and others. The economic consensus appears to be that automation has, in the past, led to the creation of new types of work that have made up for the loss of previous human work. It remains to be seen whether this will also be true given the fundamental differences between AI and previous technologies.
What is increasingly less a matter for reasonable disagreement is that AI systems that can perform the cognitive component of most knowledge work at or above the level of human experts are indeed coming in the near future (read: within the working careers of anyone under 40). Philosophers should not stick their heads in the sand on this point. I was much more of an AI skeptic just 6 months ago, so let me share some of what has changed my mind.
Though I still think at the moment of writing a reasonable case can be made for a more pessimistic view on the timeline towards such AI systems[8], this case is rapidly becoming less plausible. And let me be clear: if your view on AI progress is informed primarily by interacting with older AI systems, talking points about “stochastic parrots”, or funny pictures of AI systems being unable to count the number of times ‘r’ appears in ‘strawberry’, then your views are out of date and epistemically unjustified.
Even if you are not convinced by my brief case for optimism about progress towards AGI, you should bear in mind the following points. First, the effect of AI on the job market in the near-term depends more on employer perception of AI progress than the reality of it. And second, AI progress will be uneven and even if some areas of white-collar work remain resistant to AI automation, others are quickly falling. Later I will give some reasons to think that the areas quickest to fall will be the ones that philosophers are most qualified to work in.
Academics are aware of the increasing use of adjunct labor on the academic job market. In parallel, labor theorists talk about the rise of the “gig economy”. The proportion of work done outside the confines of a traditional employer-employee relationship in the US is increasing. In the traditional model, you work full-time for just one employer. Your position has no set end date, and it is reasonable to expect in many cases that your employment situation will be fairly stable. Your employer provides health insurance and other benefits like retirement contributions.
By contrast, in these alternative work arrangements, your situation is far more precarious. Worryingly, it seems that, particularly post-COVID, many positions in the US white-collar market that used to be secure full-time positions are being converted to these alternative arrangements. This is particularly noticeable for entry-level positions of the kind that philosophers entering the non-ac market would be competitive for (you are fooling yourself if you think that your PhD in philosophy will vault you into the hallowed halls of middle management). Let me highlight some of the issues you are likely to face in two of these alternative arrangements:
There is good reason to think that, within the relevant market, the proportion of work done using these alternative arrangements will continue to increase post-COVID.[9] The reason is that the same infrastructure built up to support remote work also provides the needed basis for more easily converting regular positions to alternative work arrangements.[10] If it can be done remotely by one person, there is a decent chance it can be chopped up into tiny pieces and done by dozens of online workers that you don’t need to give health insurance to.[11]
Hopefully you manage to cobble together enough meaningful freelance and contract experience to become competitive for regular positions. But either way, you may well spend years on the job market dealing with these conditions first. One takeaway: you should not be confident that you will avoid years of adjuncting merely by going non-ac.
I will now pay homage to my philosophical training by anticipating an objection. However, I will dishonor this training by considering not the strongest objection, but rather the one I think will most quickly pop into the average reader’s head.
Objection: The overall unemployment number is near a 20-year low! Surely things aren’t as dire as you are implying.
There are many reasons why this is a bad objection. Let me highlight two of the most important.
Worryingly, there is reason to suspect that the particular skillset of the philosopher renders us especially vulnerable to these trends.
Sadly, it seems to me that someone coming onto the non-ac market with nothing but a BA or PhD in philosophy is among those most vulnerable to the deadly trio of offshoring, automation, and gigification.
So what’s an enterprising philosopher to do? Let’s assume you are sticking to white-collar work and staying within the US. Here are your options as I see them.
Getting a decent non-ac job as a philosopher using the traditional playbook (e.g. marketing one’s experience writing a dissertation as “project management”, or attending a short coding bootcamp) is still certainly not impossible, and there are many cases of philosophers entering this market for the first time even post-COVID and succeeding. The good news is that compared to the academic market, the size of the non-ac market is such that you’ve got practically unlimited chances, and the only costs are your time, energy, and sanity. If you opt for this route, I strongly recommend investing heavily in networking, or ‘nepotism’ as it is spelled in some dialects, which is undoubtedly the highest value currency in any job market.
Every philosopher has been warned up and down about the horrors of the academic job market, and mostly rightfully so. In the near term, it is only getting worse due to the debacle with federal funding. But there are at least two good reasons to think twice about abandoning the academic market if the situation on the non-ac market is as I have described it.
First, you may be overestimating your prospects of getting a position on the non-ac market that actually satisfies the reasons you don’t want to go on the academic market. In my case, for example, two of the biggest reasons were not being maximally geographically flexible and disliking teaching. However, the difficulties I’ve experienced have made me far less confident that I will be able to stick to my geographic preferences and get work that aligns better with my preferences in an increasingly tight US non-ac job market.
Second, academic philosophy is actually somewhat insulated from the market forces I have described. My hypothesis is that the primary economic value of the academic philosopher in the future will be as something like a social set piece. Philosophical research, to be clear, is strongly under the gun for automation[13] and offshoring, but this research has little economic value except to the publishing companies, who do not pay philosophers’ salaries. In other words, no academic administrator would react to the news that Philosophical Studies is filled with LLM-generated papers and that AI is now better than humans at coming up with counterexamples to the KK principle by deciding “well, I guess we can now lay off the philosophy department”. Philosophers will largely be left to their own devices to figure out how to deal with the automation of philosophical research. The real economic value of philosophers for universities, of course, lies in students wanting to take philosophy courses. And even though there is early evidence that AI systems are already remarkably effective as teachers, I don’t believe that this will greatly reduce the demand for philosophy courses run by humans. Most students are not going to want to sit at home on a computer with an AI-based education system overseen remotely by a few lucky philosophers in Bangladesh. They will want to come onto a campus and sit in a stately-looking room with other students where an eccentric person who rambles on about justice makes them feel like they are in Dead Poets Society. At least, that’s my guess.
The broad factors I have identified will not affect all white-collar positions equally. Jobs in the national defense industry, for example, will certainly not be offshored (though they might be automated). Unfortunately, I suspect many of these havens will require qualifications beyond a PhD or BA in philosophy to be competitive for, and if they do not, they may be overrun by people from other sectors anyway. Law, for example, seems to be a promising possible haven: the requirement of a US law license for many legal actions makes automation, offshoring, and gigification less likely, and the legal industry still has a strong culture of recruiting true entry level positions from law schools. However, if this prediction is incorrect, you may find yourself tens or hundreds of thousands of dollars in debt and a few years older but back in the same situation.
In terms of what to do positively speaking, then, I have little concrete advice other than to take the non-ac market very seriously if it is at all on your horizons. Try to do internships, freelance work, or other non-ac work during the summers, even if this cuts into your research or dissertation time. Aggressively and shamelessly network.
What not to do is a little easier to say. Do not:
Good luck out there.
[1] If you want to scare yourself with an endless supply of these anecdotal reports, I recommend browsing reddit.com/r/recruitinghell, while bearing in mind that there is a strong selection effect with those that post experiences there.
[2] Predictably but infuriatingly, top Meta executives responsible for this layoff nearly simultaneously increased their own salary bonuses by millions of dollars.
[3] Broecke 2024 has a good discussion of the history of offshoring and its potential directions post-COVID.
[4] Baranes 2025 appears to contain a discussion of the issue, but I haven’t been able to access the text.
[5] A relevant piece of counterevidence is the widespread partial RTO (return-to-office) mandates at US companies, generally at 3 days per week. My expectation, however, is that these are due to factors that will not persist. Companies may reason that they don’t plan to take the radical step of immediately cancelling all office leases and selling off their physical infrastructure, and so “we may as well get them in instead of having the buildings sit empty”. Decision-makers may also prefer to have people RTO for non-monetary reasons, such as preferring the environment of a teeming corporate office. Such reasons may not hold sway for long. Overall, however, I recognize that current evidence on the effectiveness of in-person vs remote work for the relevant positions appears mixed.
[6] The highest-quality recent survey of the relevant experts that I am aware of was done at the end of 2023. It found an aggregate prediction of a 50% chance of such AI systems existing by 2047. While this might seem like weak evidence for my claim, bear in mind the following: (a) the trend of these predictions is towards shorter timelines: in the same survey in 2022, the 50% figure was at 2060; and (b) in my view, incredible AI progress has been made since late 2023 and an updated survey would reflect this.
[7] There are some outstanding concerns about this particular result. For a good discussion, see here.
[8] For a recent well-informed counter-perspective on this topic, see here. For the record, I share the intuition of some AI researchers that current AI systems appear “brittle” in a way that indicates a lack of some fundamental component of human intelligence. However, my expectation is that overcoming this will not require some fundamental change in current AI architectures, that there are currently research directions in AI that seem promising for this issue, and that the vast sums of energy, attention, and money being thrown at AI development will be sufficient to fix the issue in a timely manner.
[9] See ch. 2 and ch. 8 in Countouris et al 2023 for discussion.
[10] Woodcock and Graham 2020 (ch. 1) give at least three factors contributing to the rise of alternative work arrangements that are strengthened by this phenomenon: platform infrastructure, mass connectivity and cheap technology, and digital legibility of work.
[11] This reasoning doesn’t explain why contract positions have increased or will continue to increase. I am less confident in that prediction.
[12] There are also other oddities with this measure, such as not including people who’ve given up trying to find work within the number of unemployed. To be fair, this population seems to be fairly small.
[13] Any philosopher who believes that there is something distinct about philosophical inquiry that makes it uniquely immune to AI systems, such that we might live in a world where adjacent areas like math and legal research are automated but philosophy remains a stubborn holdout is, I believe, deluding themselves.
Related: Non-Academic Hires




This is a convincing, sobering piece. To add to the reasons for pessimism, I’d suggest that even in industries protected by regulation, like law, there may still be fewer entry-level roles, as work that in the past would have been done by a partner managing a team of junior associates can now be done by a partner working with AI. Some initial reports suggest law firms may be moving in that direction:
https://www.abajournal.com/web/article/law-firms-reduced-the-pace-of-associate-hiring-shifting-to-new-talent-model-report-says
https://www.artificiallawyer.com/2025/01/09/will-law-firms-need-fewer-junior-associates/
I really don’t know what advice I’d give philosophers looking for alt-ac work right now. (Not that I would expect to be in a position to give good advice; I’ve never been on the alt-ac market.) I am thinking about what sort of labor market my children will face. My oldest child is really into chemistry, and I suppose I’m hoping that hands-on science and engineering will be relatively insulated from the trends the OP discusses; my guess is that we’ll still need to run experiments, those experiments will still need to occur in places with high-tech lab equipment, and they’ll still require human oversight, troubleshooting, and adaptation to unexpected outcomes that would be hard to automate. Even being able to recognize and describe unexpected experimental outcomes well enough to ask an AI for advice in handling them will, I expect, require a lot of training. But even if all that’s right, it’s not particularly helpful for philosophers.
I want to push back against this, because I know working lawyers. According to them, AI is functionally useless. It reliably produces bad and inaccurate legal advice, which is especially dangerous when the stakes are high.
The growing use of AI has actually created more work for junior lawyers, since other, lazier lawyers or people “representing themselves” submit documents written by AI that are incoherent or obviously wrong, so someone has to waste painstaking hours submitting responses that point out obvious mistakes. My brother, himself a lawyer, illustrated this to me by asking Google AI an incredibly simple legal question, only for it to give the wrong answer (it said every employee in Ontario was entitled to the same number of sick days as federal workers).
I’m more familiar with law, but the closer I look at each industry where people claim AI is taking over I discover the same thing. Bosses want to replace their workforce with AI, but the AI is useless and the promises outstrip what it can do. As philosophers we should be less credulous about all this tech-hype (Student Philosopher’s comment above is helpful).
I’ll push back against the pushback. Here’s Adam Unikowsky (I know him from college), a pretty high-powered Supreme Court lawyer. If your brother doesn’t know how to use Google AI to generate good legal advice, that may just mean your brother hasn’t spent time optimizing the right AI engine to do it. Unikowsky did have to train an AI on a ton of Supreme Court briefs to get it to produce the results it did, but once he had done so, its output was extremely impressive, or so he claims:
https://adamunikowsky.substack.com/p/in-ai-we-trust
https://adamunikowsky.substack.com/p/in-ai-we-trust-part-ii
One reason for the discrepancy may be due to the different implementations of AI use being described here. I would caution against judging AI capabilities from Google’s “AI overviews” feature. Since there are a ridiculous number of Google search queries being performed at any time, Google is probably using a cheaper model with lower inference costs to serve this feature. Moreover, the results are generated instantaneously — the model isn’t given time to think, unlike with the newest AI systems that take more time to think with more complicated queries.
I’m guessing that Eric would get better results asking the question to a frontier AI system like Claude 3.7. Even better if Eric were to upload some relevant documents into the context window, which would more closely mirror the implementation of these systems in professional legal contexts. This piece by Unikowsky describes using Claude 3 Opus and uploading relevant PDFs into the context window, which explains the better results obtained.
Let me add that this incident illustrates a broader reality which actually makes law a more promising haven than some others. My outsider perspective is that law is less technologically mature than many other fields; for example, there’s still a shocking amount of reliance on physical paper (although my perception here could be outdated). I anticipate that there will be more resistance and more friction in the adoption of AI and other technological solutions that allow offshoring and gigification in the field of law than in other fields.
Yes. A good way to think of it, I think, is as follows. Is the investment a legal partnership needs to make in identifying and training the best AI model to do junior associate work more or less than the investment they need to make in hiring a junior associate? Hiring a junior associate is a *huge* investment. So even if training an AI model to do junior associate work is non-trivial (you can’t just fire up ChatGPT and start asking it contextless legal questions, but instead need to pay for a top-of-the-line model, and then spend some time fine-tuning it for your specific legal needs), there’s still a lot of scope for it being cheaper than hiring human beings who draw salaries and benefits year after year.
If the contention is that a Supreme Court lawyer can eventually coach the AI to do what he himself could do in less time, I concede.
I’m being a bit glib, but this is not the scary time-saving automation we are often threatened with.
I don’t think this is minor. It’s not like this is a rare skill that only supreme court lawyers have. If a supreme court lawyer can do it, I would bet tons of senior partners can do it too, maybe in a few years when the technology becomes more widespread (Unikowsky has a CS background, so I’d bet he’s a bit ahead of the curve on this.)
I’m surprised the FrontierMath results are being used here as evidence, especially given the linked discussion.
Why didn’t you pursue an academic job, OP? Per APDA, USC places students into TT jobs at an absurdly high rate of roughly 70% or more.
They mentioned they didn’t like teaching and didn’t want geographical constraints.
A few thoughts: for those going onto the non-academic job market, I think two things are absolutely key to securing a job (because the job market is difficult).
First, knowing and narrowing the sort of job that you want and researching to make sure it is realistic with the current market. In this job market, casting a wide net results in no fish. When you narrow down, you’re able to build the skills you need for that particular job.
Second, networking is absolutely essential, as mentioned in the article. When you have a narrow job that you are looking at, networking becomes a lot easier. You can talk to people in the field and learn how to be a competitive candidate.
In other words, no academic administrator would react to the news that Philosophical Studies is filled with LLM-generated papers and that AI is now better than humans at coming up with counterexamples to the KK principle by deciding “well, I guess we can now lay off the philosophy department”. Philosophers will largely be left to their own devices to figure out how to deal with the automation of philosophical research. The real economic value of philosophers for universities, of course, lies in students wanting to take philosophy courses.
I don’t agree that philosophical research has no economic value to universities. After all, most research-intensive universities make tenure and hiring decisions primarily on research capability. The idea here is that research is indirectly economically valuable to universities: better research causes higher rankings, which causes more grants and more students (things which are directly economically valuable to universities).
Assuming that in the near future, LLMs are able to do philosophical research better than philosophers, this suggests that research by philosophers will no longer be as economically valuable to universities, which may very well result in mass layoffs. Optimistically, LLMs may research better than philosophers but not teach as well (as OP suggested), and in this case, philosophers may keep their jobs but there will be a drastic shift in hiring/promotion criteria from research to teaching. In any case, I don’t see at all how philosophers will “largely be left to their own devices”.
As someone in tech, working in AI, with at least a third row seat to what’s going on, I can vouch for some of this but want to caution that this statement is unproven:
“AI and Automation: The tech businesses that are downsizing are the very same ones currently building frontier AI systems explicitly designed to automate a massive amount of white-collar work. This could just be a coincidence, but I suspect that the front-row seat that decision makers within these businesses have to the progress of AI development has given them a sense that they will soon, if not already, be able to do more with fewer human workers.”
In my experience, this narrative is more hype than reality. As someone working inside these companies, relatively little was being automated with LLMs. Some work was being enhanced, but no one’s work was replaced. Many efforts, even to automate some administrative work, failed.
It’s important to understand that business leaders stand to profit enormously in the short term from hyping AI, convincing others to buy it, and laying off workers to create the appearance of improved profit and increase the share price of their companies.
Many experts and practitioners in the field are skeptical that LLMs are capable of doing professional work to the same level as a human. Those sounding the alarm are alerting us to the dangers of replacing an intelligent human with an unproven technology. Read work by Geoffrey Hinton, Gary Marcus, Timnit Gebru, and even Meta’s Yann LeCun.
I urge us not to fuel the propaganda of business leaders who stand to profit from outsized promises. Instead, we should demand that they prove the technology works before we take their word for it.
It’s difficult to separate the hype from the reality here, and my view is that there’s a mix of both and it varies greatly by specific professions and tasks.
More specifically, I believe that a lot of the hype about the current ability of AI systems is justified for some tasks involving writing code and natural language. The current systems appear to be pretty good at this and getting better rapidly. I could cite some more evidence but there’s a decent amount in the piece. They work less well when you have to integrate them with other systems, including for simple administrative tasks like filling out forms or sending emails. I would imagine that almost no work in accounting, for example, can currently be automated well both due to the systems not working well with, e.g., Excel spreadsheets, and due to the very high costs of a hallucination like making up a figure in a certain cell.
Of course, as you agree, the near-term effects on the job market are more about the hype than the reality. In the long-term, the reality will matter, and my bet is that those shortcomings will continue to be overcome.
This is a great piece. But I disagree when you say “academic philosophy is actually somewhat insulated from the market forces I have described”.
AIs can be used for grading. Universities are building online asynchronous programs. The courses in such programs are canned, and, after construction, the only role of the faculty is grading. If AI can do that, there’s no need for faculty at all.
Pretty much everything you’re saying transfers from the PhD level to the BA, meaning that it’s going to be very hard for philosophy undergrads to get jobs. There will be fewer and fewer philosophy undergraduate majors. Departments will close.
Based on anecdotal evidence from AI articles here on Daily Nous, philosophers are strongly opposed to using AI. (But there may be a strong selection bias at work.) If the entire economy is screaming “Use AI!” and philosophy professors are banning it, students will have one more reason to avoid philosophy. Departments will close.
You write that students want to “come onto a campus and sit in a stately looking room with other students where an eccentric person who rambles on about justice makes them feel like they are in Dead Poets Society.” Very, very few students do that. Most come to a shabby room on a run-down campus where the instructor is poorly paid and not very qualified or interested. There is very little social incentive.
One thing I don’t think this article mentions is the alt-ac (as opposed to non-ac) job market: positions in teaching centres, grant seeking, student life, and all those other parts of the university where hiring over the past couple of decades has greatly expanded while faculty jobs continue to decline. Those areas tend to focus on people skills, which are harder for AI to automate, and they’re ones where a PhD is a much more meaningful credential than in the non-ac world (because it means you can relate to faculty and know more than the typical undergrad about how universities work). My impression is that US alt-ac is not in a great position right at the moment due to the mammoth cuts to federal agencies, but I suspect it will recover as the dust clears. That isn’t to say alt-ac is for everyone, but I think it is one of the more promising exit strategies for a lot of PhD holders.
Thank you for this excellent if sobering post for which I hope you receive the compensation you deserve.
It appears that we have reached the arresting conclusion that sooner or later (but probably sooner than later) capitalism turns us into meat (or batteries). In light of this it is a bit strange that we still go on discussing the best survival strategies…
I think there are good philosophical reasons not to fall for the hype of AI.