If there's one thing the test community loves, it's to argue terminology. If you doubt this, throw the question "What's wrong with the term Quality Assurance?" in any public forum or social media channel with a software testing and quality focus.
I'm going to highlight three of these semantic arguments, find the good and the bad in them, and with brazen hubris declare how they should now and forever be resolved.
QAs, do you mind QA'ing this release?
Years ago I read an interview with a well-respected test consultant. Asked the one thing he wanted non-testers to know about testing, his answer was that they shouldn't use the term QA, because the term is looked down upon by the test community.
QA stands for Quality Assurance, and it's true that, taken literally, we cannot assure quality if we interpret this as test specialists guaranteeing that a product release has perfect quality. It's important to know that testing cannot guarantee the absence of bugs, and that even the absence of bugs does not guarantee quality.
However, that's not on any non-tester's mind. Their idea of what QA is did not come from reading the dictionary definitions of the individual words that make up Quality Assurance; it came from how the term was used in the companies where they first encountered it. QA, for non-testers, is almost synonymous with testing. Almost. They often come to understand there's an ineffable extra 'thing that QA does' (asking tricky questions, critiquing things that weren't in the requirements) that doesn't fit their idea of testing (which would traditionally be called Quality Control), so they ask someone to QA instead of to test.
I've made the mistake myself of trying to move people from saying Quality Assurance to saying Quality Assistance (and there are many other alternative interpretations of the QA abbreviation, such as Quality Advocacy or Quality Acceleration). The battle to stop people from saying QA or Quality Assurance has raged for a long time (I first came across it in the late 2000s) and it's not worth the effort.
Complaining about the term in blog articles (am I guilty of it myself now?) and social media posts doesn't give the greatest impression to newcomers or casual guests to the test community. Instead, explain at appropriate moments what testing and quality specialists can contribute, and try not to cringe when someone says QA (which in the case of "QAs" or "QA'ing" is really hard to do!). We don't have to break up conversations about things that matter with complaints about things that matter less.
Can you automate this manual test?

Manual testing is a terrible term: it sounds like grunt work, like a repetitive chore. Scripted testing and exploratory testing are both called manual, yet they differ vastly in how much they resemble grunt work. This matters all the more because manual testing is contrasted with test automation, while scripted testing can indeed be automated and exploratory testing cannot (unless you want to be pedantic and say that once a test scenario has been uncovered through exploratory testing, you may then be able to automate it).
This term just has to go.
I've seen some traction within the test community for the idea of getting rid of the manual vs automated dichotomy altogether and instead talking about "attended" and "unattended" testing. I think this is another battle that's doomed to be lost, especially considering how thought leaders in the test community have typically engaged with the wider development community (either very little or in an adversarial way). The persuasive power needed is entirely absent.
A more realistic (and, I think, elegant) solution is to keep the manual vs automated dichotomy, but simply replace "manual" with "human". Contrasting test automation with "human testing" feels apt, is easy to understand, and replaces the association of grunt work with an association of conscious presence and higher reasoning. It's difficult to imagine someone getting confused because humans are involved in test automation as well; after all, no-one was confused by the fact that automated testing involves manual work.
Requirements not to function?

The term non-functional requirements (NFRs) has been in industry use for many years, meaning the requirements beyond those describing the functionality to be implemented. However, some have realised that, with a bit of ill will, it's possible to interpret the term as describing requirements for an application not to function. And surely we want applications to function?
I don't think anyone has ever been genuinely tripped up by the term and concluded that an application should be implemented in a way that doesn't function; rather, the misreading is used as a rhetorical device to advocate for more precise language. Either way, it has led to alternatives like extra-functional, para-functional, or cross-functional requirements. To a person who doesn't know the background, it's not very intuitive to hear someone talk about para-functional or especially extra-functional requirements. Are they talking about extra functional requirements (note the absence of a hyphen)?
Admittedly the term non-functional requirements is a bit imprecise, so let's use cross-functional requirements instead. It's by far the most commonly used alternative already. Personally I also find it more intuitive, as it clarifies that such requirements apply across all functionality (for example in the form of SLAs) instead of only to a specific feature (for example as defined in a Jira ticket's acceptance criteria).
Don't get too worked up about the imperfection of industry-standard terms; you want people to understand what you're talking about. If you get a chance to use better terms (such as human testing instead of manual testing), promote their use in a human-friendly way, for example by adding "also called <traditional term>" or a very succinct explanation.