Wish: Let customer choose initial widget screen (i.e., link directly to chat)

For visitors who aren’t already in an active chat, the widget defaults to a screen that shows multiple choices: “Help Center” (with a search box), and “New Conversation.”

If the customer disables the “Knowledge Base” in Admin → Widget Settings, then “Help Center” goes away - yet the widget still requires the visitor to read and understand the interstitial screen and click “New Conversation.” There’s no way to reliably link to or show the chat pane directly.

This is most relevant when the Tawk customer uses the “Direct Chat Link” triggered by a link or button on their own site. For example, I’d like to include a “Start chat” or “We’re online - ask us anything” link in our site design, but that isn’t possible - the link can’t actually drop the visitor into a chat session. There is no way to link directly to a chat session, only to the multi-function Tawk home screen.

Some possible ways to improve this, including one - (A) - that seems like a quick win and should be the default now:

A. When the widget is opened via the “Direct Chat Link” URL, the only module enabled in Widget Settings is Chat, and there are no recent chats, show the new chat pane by default. Don’t make the user work through another screen (that only has one relevant choice) and click “New Conversation” - that just decreases conversion.

It seems like this should be the default, in that the whole point of a “Direct Chat Link” would be to link to chat. If Tawk also wants to offer a “Direct Support Link” (that shows all support modules like the current link does), great, but there should be a legitimate direct link to chat.

B. A bit more work but also more flexible: in the “Direct Chat Link” (and maybe also the Tawk_API.showWidget() function), let the implementer specify the initial screen. For example: https://tawk.to/chat/abc-def-123?show=chat (and, if other modules might benefit from being directly reachable, other values like https://tawk.to/chat/abc-def-123?show=kb).

(If you wanted to offer that functionality through the JS API as well, the equivalent could be Tawk_API.showWidget({ show: 'chat' }) or maybe a separate Tawk_API.switchScreen('chat') function. Not at all required though.)
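To make (B) concrete, here’s a small sketch of how a site could build such a deep link. To be clear, the `show` parameter and its values (`chat`, `kb`) are what this wish proposes, not an existing tawk.to feature, and `directChatLink` is just an illustrative helper name:

```javascript
// Sketch of proposal (B): build a "Direct Chat Link" that deep-links to a
// specific widget screen. The `show` query parameter and its values
// ('chat', 'kb') are hypothetical - they are this proposal, not a
// current tawk.to feature.
function directChatLink(propertyId, screen) {
  const url = new URL(`https://tawk.to/chat/${propertyId}`);
  if (screen) url.searchParams.set('show', screen);
  return url.toString();
}

// A "Start chat" button on the site could then point at:
const startChat = directChatLink('abc-def-123', 'chat');
// i.e. https://tawk.to/chat/abc-def-123?show=chat
```

Omitting the parameter would preserve today’s behavior (the multi-function home screen), so existing links wouldn’t break.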

(B) seems like a question of work for benefit, but (A) seems like a more thoughtful default for existing users.

Hi @Alfonso7 ,

Thank you so much for sharing such a detailed and well-thought-out suggestion. We truly appreciate the time and effort you put into outlining both the issue and the potential improvements.

Your ideas make perfect sense—especially your point about enhancing the chat widget’s behavior to automatically open the chat pane when no other modules are active. This would definitely create a more seamless experience and reduce friction for visitors who simply want to start a conversation.

I’ll make sure to pass your feedback along to our product team for review and consideration in future updates.

Thanks again for your valuable input—insights like yours play a big role in helping us improve the platform.

Hi,

Okay. Thanks for responding.

While I’m here, this response inspired one other piece of feedback: it feels like this response was mostly or entirely written by an LLM. That’s not unethical, but from a customer’s perspective it has some significant weaknesses that might not be obvious. Since this is an important topic right now, I took a couple of minutes to write them down:

  • The best an LLM can do is a mediocre, fairly bland response. This is probably the biggest problem. There’s nothing unprofessional about this response, it just feels like I’m interacting with Claude or ChatGPT rather than a human who has opinions and expertise. (For some businesses, just delivering consistently mediocre responses is a lofty goal - and for those businesses, maybe an LLM works. For my businesses, I aim to deliver good or great support.)
  • It’s signed by a human, even though it seems like a machine wrote most or all of it. This is probably the simplest improvement: when an LLM writes most of a response, transparently say that. That way, at least the recipient knows they’re evaluating a tool, not the vendor’s staff, competence, or culture.
  • It feels “flowery.” LLMs tend to generate many words but not say much.

On that second point, when a person sends the output of an LLM without disclosing the source, it becomes impossible for the customer to evaluate the company’s actual competence - which hopefully is higher than the LLM’s :slight_smile: An average customer doesn’t know that a response was written by an LLM; they just evaluate the output and conclude that the support and company culture are bland. Contrast that with a response like “Hey, good idea! Here’s what an LLM said about your suggestion:” (followed by the pasted, boring LLM output) - I’d still prefer a human’s opinion, but at least it’s clear which part of the response I should ascribe to the company’s skill.

On the first point, today, LLMs are poor substitutes for thoughtful, savvy, opinionated support engineers. Maybe in 3 or 5 or 10 years, that’s no longer true. As a customer, I’m fine interacting with an LLM, but just expose the LLM to me directly and tell me that’s what you’re doing: “Here’s our AI chatbot. Give it a try if you want. If it answers your question, great. If not, click here to send your request on to a real person.” I can’t think of a situation where I want a human-mediated LLM response. The good uses of LLMs in support make completely clear to the customer that they’re reading the output of a machine, so those good uses tend to be self-service.

Hi @Alfonso7 ,

Thank you for taking the time to share such thoughtful feedback — we really appreciate it. You’ve raised some excellent points about transparency, tone, and the role of AI in support communication.

We completely agree that authenticity and clarity are key to maintaining trust, and your perspective helps reinforce that. We’ll definitely review how we use AI assistance in our responses to ensure they remain both genuine and helpful.

Thanks again for sharing your insights — this kind of feedback is incredibly valuable as we continue improving how we communicate with our customers.