Will we have a DT 4 in the near future?

AI corrupts, and I will gladly back that up with an example. I’ll try to keep the venom and spittle to a minimum, even though I was bothered to the point of having a very interesting nightmare last night.

Texas gauges public school performance with STAAR tests, the State of Texas Assessments of Academic Readiness.

Students are naturally prohibited from using AI to produce assignments or to answer test questions.

STAAR test essay questions this year will be graded mostly by AI. 25% of the tests will be graded by educators.

I wrote my first letter of protest to the Texas Governor when I was in sixth grade. It’s a given I would have presented my teachers with two choices, either of which I would have respectfully abided by.

Either certify for me that integrity counts and that I can start introducing AI into my assignments, just like the school does, or admit that education in this instance is based on hypocrisy. Either AI has a place in the classroom or it doesn’t.

Students have assignments. They complete homework. Teachers have always told me they have assignments, too. They grade homework and their assignment is tougher.

Tough assignments, faithfully completed, bring rich rewards. Do I hear education leaders saying credit for academic achievement should shift, at least a little bit today, away from teachers?

Yeah, I get it. I like to write, which is like saying I like to manufacture buggy whips. Like the chickens on KFC’s processing line, I’m grumbling.

It’s OK. Like the chickens, I’ll soon enough be silent.

1 Like

I was a die-hard Obsidian user for two or three years. I switched back to DevonThink Friday and am loving it.

Will I still be using DevonThink next Friday? Don’t know. I’m super-fickle about productivity software.

Obsidian is great for people with coding skills who like to tinker. You can tinker with DevonThink too, but it’s designed to be more set-and-use.

Obsidian is designed to do everything in Markdown. Support for PDFs, rich text and Microsoft Office documents is poor. DevonThink is much better at supporting those document types. And DT supports Sheets too, which are much easier to use than Obsidian’s Markdown tables, Dataview and DB Folders.

Also, DT and Obsidian go well together.

My marketing advice to DT is to ditch the version numbers. Just call it “DevonThink.” Things 3, by Cultured Code, has a similar problem. This marketing advice is free and worth double what Devon Technologies is paying for it.

2 Likes

Disruptive innovation requires a number of factors:
(1) A low cost entrant
(2) Offering capabilities that some market segment needs and wants, allowing them to
(3) Build a user base and more expansive capabilities that eventually
(4) Allow them to displace the incumbent.

Digital photography initially served the low-end "sharing pictures with friends" segment, only gradually displacing the "professional-quality images" segment.

Obsidian’s ability to serve the “casual notetaking” segment does not in itself make them disruptive to DT’s command of the “industrial-grade research database” segment. To get there, they need step #3, which means that they need developers who themselves want a DT replacement.

4 Likes

I’m not actually convinced of that. Which ML providers have successfully monetized their products as standalone products? For Google to use ML to serve ads more effectively is an augmentation of their existing business model. For OpenAI to let people play with ChatGPT for free while they burn investors’ money is not successful monetization.

3 Likes

Does it really matter if ML is a "standalone product" or not? IMO it’s apparent that "software technology" cannot live as a standalone product independent of its "hardware". Self-driving depends on an automobile with wheels. Generative AI relies on the computing power of specialized equipment. Machine learning, as software and algorithms, depends on relevant hardware and a certain use case. Naturally, ML providers package their hardware, software and use case together, since ML is, by its nature, not a product at all on its own.

Whether something is an augmentation or a disruption is completely subjective. IMO, Google Ads’ use of ML is disruptive to human analysts. Nvidia is disruptive to Intel and AMD. This is why business nonsense like disruptive innovation holds little practical value. How can it be a useful theory if you have to constantly maintain purity by telling everyone else that their definition of disruption is wrong?

And in this case the “hardware” is a data center. Those run about a half-billion dollars each. It’s premature to call ML a “low-cost” entrant to the market until we see business models (rather than investor cash) that will actually support those investments. Meanwhile my Mac Studio and DT are happily chugging away.

As for the definition of disruptive innovation, take it up with Prof. Christensen.

1 Like

I believe any idea has a warranty period, after which its original proposer no longer holds definitive authority regarding its explanation. For disruptive innovation, an idea coined in 1995 and oriented towards a highly volatile business environment, this warranty period has long passed.

Of course, that the original proposer holds no authority does not mean that their explanation should be thoroughly discarded. Rather, it means that their opinion does not necessarily carry more merit than another commenter’s, especially if the "other commenter" (not me; check out the HBR article I posted) represents the mainstream interpretation.

In other words, the drift of the theory does not necessitate a return to its origin, as your comment implies. Rather, the fact that a drift has happened is a demonstration of the limitations of the original. This assertion forms the basis of the nascent discipline of conceptual history. YMMV as usual.

Whether the “cost” of a single, massive data center is higher or lower than that of millions of personal devices depends on your definition of “cost”.

Well, the personal devices aren’t going away, even if the applications they run change.

But actually the end user cost of a data center depends on how many users there are, and how much they pay. 500 million users paying a dollar each? Or ten corporate users paying $50 million each? But my underlying point stands. Until ML vendors are at least attempting to recover their costs, it’s impossible to say whether ML really is a “lower cost” alternative.
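Spelling out that back-of-envelope arithmetic (purely illustrative numbers, picked to line up with the rough half-billion-dollar-per-data-center figure mentioned above):

$$
5\times10^{8}\ \text{users} \times \$1 = \$5\times10^{8}
\qquad
10\ \text{customers} \times \$5\times10^{7} = \$5\times10^{8}
$$

Either way the revenue comes to roughly the cost of a single data center; the per-user price can differ enormously and still produce the same total.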

It seems you’re assuming that the only way to “recover” costs is directly from paying customers. In reality, however, costs can be “recovered” indirectly from investors as well. Microsoft’s soaring stock price in the past year has more than recovered the company’s investment in OpenAI.

On a broad scale, a select group of investors stands in for a much larger base of customers. OpenAI’s costs are recovered from Microsoft’s investment, which in turn is ultimately paid for by ordinary customers through a complex chain of business relations. This process of delegation happens as often in business as it does in politics.

Well, a company can continue to spend investors’ money for as long as investors are willing to support them. But ultimately, survival as a going concern requires profitability. The tech industry in particular is littered with the corpses of companies that forgot that most fundamental rule.

Generally speaking, when companies start pretending that profits don’t matter, that’s a good time for smart investors to take their money elsewhere.

3 Likes

I completely agree with you that OpenAI will die if it cannot sustain investor interest or become profitable. This brings us back to the topic of disruptive innovation.

Question: Can a company be considered a disruptor if it dies due to lack of profitability?

My answer is a firm yes. This is well illustrated by the "Uber revolution" in mainland China: virtually all of the initial entrants were gone within a couple of years, since they could not keep subsidizing customers forever. However, as people got accustomed to Uber-ing, latecomers have been able to sustain profitable businesses without generous subsidies. And the traditional taxi industry has essentially been wiped out. The disruptors have died; the disruption lives.

Same goes for OpenAI: even if it does not survive in the end, the disruption has already taken place. Many for-profit AI service providers, which do not bear the burden of training their own mammoth models, have sprung up and are apparently running well. Does it really matter whether OpenAI becomes profitable? Nope, as long as others do. Whatever OpenAI’s fate may be, history will probably remember it as the pioneer of an era of disruption. By whom was that disruption funded? The investors.


P.S. It seems we are holding different perspectives on the same story. For you, the business’s (first-person) perspective; for me, the bystander/commentator’s (third-person) perspective.

Propaganda, characteristically, performs well in first-person perspectives, but does not hold up in third-person perspectives. (Think of e.g. nationalism.) Thus I’m inclined to put the disruptive innovation theory in this category.

1 Like

And by creditors if the company goes bankrupt.

But that’s a minor point - I agree with your overall assessment.

How is that happening? Microsoft would profit from soaring stock prices only if they own their own stock. Massively so, to offset the investments in this case. And eventually, they’d have to sell the stock to convert it to real money. I think.

1 Like

What’s considered a local node map? A map of connections to/from a document?

This is, again, a matter of definition. Do you view the people or entities that own Microsoft stock as more or less part of the company? Do you think Microsoft’s investment in OpenAI could also be considered the MS stockholders’ investment? If one agrees with both, then one could confidently say that Microsoft is indeed profiting from its rising stock price.

Indeed. Ideally, secondary and tertiary connections could be visualized as well, which would make the node map more powerful than the links inspector (which displays only primary connections) for certain use cases.

Obsidian has two node maps, which are charts of related documents.

The global node map is prized by some and said to be of little use by others. I’m in the latter camp.

Local node maps show connections to nearby nodes. They’re a graphic representation of what you see in the documents list of the See Also & Classify inspector tab in Devonthink.

The local node map is kind of nice. There is a great script in these forums by Benoît Pointet that will generate a node map for selected nodes, so there is already a way to create a local node map - but unavoidable limitations prevent it from updating “live” as things change in Devonthink.

It’s a great script, though. Very handy.
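If it helps to picture what such a script has to compute, here’s a minimal sketch in Python. The document names and the plain dict of links are hypothetical, and this isn’t Devonthink’s or Obsidian’s actual API; it just shows that a local node map boils down to collecting everything within a couple of hops of a selected document.

```python
from collections import deque

# Hypothetical link data: each document's outgoing links as a plain dict.
links = {
    "Project Plan": ["Budget", "Timeline"],
    "Budget": ["Vendors"],
    "Timeline": ["Milestones"],
    "Vendors": [],
    "Milestones": ["Project Plan"],  # links may loop back
}

def local_node_map(start, depth=2):
    """Collect every document reachable from `start` within `depth` hops.

    depth=1 is what a links inspector shows (primary connections only);
    depth=2 adds the secondary connections a local graph view also draws.
    """
    seen = {start: 0}   # document -> hop distance from the start
    edges = []          # (source, target) pairs for a graph renderer
    queue = deque([start])
    while queue:
        doc = queue.popleft()
        if seen[doc] >= depth:
            continue    # don't expand beyond the requested radius
        for target in links.get(doc, []):
            edges.append((doc, target))
            if target not in seen:
                seen[target] = seen[doc] + 1
                queue.append(target)
    return seen, edges

nodes, edges = local_node_map("Project Plan", depth=2)
print("nodes:", nodes)
print("edges:", edges)
```

Feed the edges to any graph renderer and you have a local node map; raise the depth to 3 and you get the tertiary connections mentioned above.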

1 Like

And would also require more overhead :wink:

1 Like

Well, this thread has fallen far off the path of the initial inquiry, so I will succinctly answer and close it. Tangents can be taken up in other threads, as the need for discussion arises. :slight_smile:

Will we have a DT 4… Yes.
… in the near future? We don’t comment on development timeframes, and "near future" is subjective.

It will appear when it’s ready but rest assured we are always hard at work.

11 Likes