The ESET LiveGrid servers cannot be reached


steingat

On 12/17/2021 at 11:40 AM, steingat said:

We get this error on a number of our endpoints whose users are working from home. These are all personal internet connections that we do not control.

Are these users using a VPN to connect to the corporate network?


1 minute ago, itman said:

Are these users using a VPN to connect to the corporate network?

The most problematic endpoints are. But other endpoints that connect to the VPN with the same configuration do not have issues.


  • Administrators
2 minutes ago, steingat said:

The most problematic endpoints are. But other endpoints that connect to the VPN with the same configuration do not have issues.

Do you know what happened on the ******NV machine between

01.01.2022 10:05:21.206 [2692:8116] INFO [DC] <dc_connector>: [AVCLOUD] Resolved '38.90.226.13' (initial, TTL: 322)
and

01.01.2022 10:22:42.755 [2792:2976] ERROR [RESOLV] <dns_resolver>: Failed to get new name server addresses (error: 7013)
01.01.2022 10:22:42.755 [2792:2976] DEBUG [RESOLV] <dns_resolver>: Obtained no system DNS addresses
01.01.2022 10:22:53.009 [2792:3624] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving hostname 'i4.c.eset.com' failed (error: 10001 CONNECTION_ERROR)
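When ekrn logs "Obtained no system DNS addresses", one quick check is whether the operating system itself can resolve the LiveGrid hostname at that moment. A minimal sketch in Python, not an official ESET tool; the hostname is taken from the log above, and the helper name is ours:

```python
import socket

def can_resolve(hostname):
    """Return the addresses the system resolver gives for hostname,
    or an empty list if resolution fails entirely."""
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        # Each entry is (family, type, proto, canonname, sockaddr);
        # the address is the first element of sockaddr.
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []

if __name__ == "__main__":
    # 'i4.c.eset.com' is the hostname from the ekrn log extract above.
    for host in ("i4.c.eset.com", "localhost"):
        addrs = can_resolve(host)
        print(f"{host}: {addrs if addrs else 'resolution failed'}")
```

If `localhost` resolves but the LiveGrid hostname does not, the problem is upstream of the endpoint (VPN DNS, relay, or firewall) rather than in the product itself.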

 


It's possible that the user disconnected or reconnected to the VPN, but no other changes would have taken place on the user's machine.


Regarding the similar LiveGrid connectivity issue in the link I posted above, below is an extract from the log shown in that posting that I feel is significant to this issue:

Quote

24.11.2021 19:56:39.611 [3688:16116] INFO [DC] <dc_client>: [type: AVCLOUD, channel: DIRECT_UDP] Secret exchange: started...
24.11.2021 19:56:39.638 [3688:16116] INFO [DC] <dc_client>: [type: AVCLOUD, channel: DIRECT_UDP] Secret exchange: done
24.11.2021 19:56:39.666 [3688:16116] DEBUG [DC] AvCloud resolve succeeded (response size: 128)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving hostname '' failed (error: 10004 INVALID_PARAM)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving failed; No cache available
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_client>: [type: AVCLOUD, channel: HTTP] SendAndReceive failed: no connection (error: 19)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] AvCloud resolve failed: internal resolve failed (result: 21202)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving hostname '' failed (error: 10004 INVALID_PARAM)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving failed; No cache available
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_client>: [type: AVCLOUD, channel: HTTP] SendAndReceive failed: no connection (error: 19)
24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] AvCloud resolve failed: internal resolve failed (result: 21202)

First, note that all log entries prior to that shown were successful LiveGrid server connections.

The most significant entries in the above log extract are the first two. It appears to me a new source server connection occurred and handshake processing took place. The next log entry indicates the handshake was successful. However, all subsequent LiveGrid server connection attempts are failing.

Note the "No cache available" entries. Could this be indicative of DNS cache issues on the origin relay DNS server?
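To get a quick picture of how often these failures occur in an advanced-logging capture, the ERROR lines can be tallied by error code. A rough sketch; the line format is assumed only from the extracts quoted in this thread:

```python
import re
from collections import Counter

# Matches lines such as:
#   ... ERROR [DC] <dc_connector>: [AVCLOUD] Resolving hostname '' failed (error: 10004 INVALID_PARAM)
#   ... ERROR [DC] <dc_client>: [type: AVCLOUD, channel: HTTP] SendAndReceive failed: no connection (error: 19)
ERROR_RE = re.compile(r"ERROR .*\(error: (\d+)(?: ([A-Z_]+))?\)")

def tally_errors(lines):
    """Count ERROR lines per (numeric code, symbolic name) pair."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[(m.group(1), m.group(2) or "")] += 1
    return counts

sample = [
    "24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_connector>: [AVCLOUD] Resolving hostname '' failed (error: 10004 INVALID_PARAM)",
    "24.11.2021 20:01:55.724 [3688:18708] ERROR [DC] <dc_client>: [type: AVCLOUD, channel: HTTP] SendAndReceive failed: no connection (error: 19)",
]
print(tally_errors(sample))
```

A skew toward one code (for example a burst of 10004 INVALID_PARAM with an empty hostname) would support the theory that resolution, not the subsequent connection, is the failing step.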


As an update: I am not sure what changed, but all of our problematic endpoints seem to have been fixed, and none that are currently connected are reporting a cloud connection problem. This seems to have happened within the last few hours.


6 minutes ago, steingat said:

As an update: I am not sure what changed, but all of our problematic endpoints seem to have been fixed, and none that are currently connected are reporting a cloud connection problem. This seems to have happened within the last few hours.

I have seen a noticeable decline as well. We are down maybe 300 from the 2,000 we were seeing.


It appears to me the issue will affect anyone using the Cogent network in the U.S. ESET has a forwarding server on that network, and it appears it wasn't properly forwarding to the ESET network in Europe.


I get this once in a while on my domain-connected machines (100+ endpoints, all wired, behind the same router), and it goes away when the client is rebooted. Can the warning be safely ignored? Shouldn't it continuously try to connect to the LiveGrid servers, or does the connection only happen once at boot?


  • Administrators
9 hours ago, FRiC said:

I get this once in a while on my domain-connected machines (100+ endpoints, all wired, behind the same router), and it goes away when the client is rebooted. Can the warning be safely ignored? Shouldn't it continuously try to connect to the LiveGrid servers, or does the connection only happen once at boot?

I would not ignore the warning since LiveGrid is crucial also for post-execution protection, e.g. Ransomware shield.

If you're able to reproduce it easily, I'd recommend:
- enabling advanced logging under Help and support -> Technical support
- rebooting the machine
- disabling logging when the LiveGrid warning is reported
- collecting logs with ELC and providing them for perusal.

From what you wrote I gather that the machines are always connected only by wire and no changes in network occur, such as connection to VPN.


  • Administrators
14 hours ago, itman said:

Note the "No cache available" entries. Could this be indicative of DNS cache issues on the origin relay DNS server?

This refers to ESET's internal direct cloud cache.


  • Administrators
14 hours ago, steingat said:

As an update: I am not sure what changed, but all of our problematic endpoints seem to have been fixed, and none that are currently connected are reporting a cloud connection problem. This seems to have happened within the last few hours.

You can monitor iris.dns.0.log and provide the output of running "ipconfig /all" when "Failed to get new name server addresses" is logged. We assume this happened at computer start and that the fallback to Google DNS failed because the communication was probably blocked by a firewall.
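The monitoring suggested above can be automated with a small watcher that tails the log and snapshots the adapter configuration the moment the failure line appears. A rough sketch, assuming the operator supplies the path to iris.dns.0.log; the function names and output file are ours, not part of the product:

```python
import subprocess
import time

# The failure line quoted in this thread.
TRIGGER = "Failed to get new name server addresses"

def is_dns_failure(line):
    """True for the resolver failure line mentioned above."""
    return TRIGGER in line

def watch(log_path, out_path="ipconfig_capture.txt"):
    """Tail log_path and snapshot 'ipconfig /all' (Windows-only)
    as soon as the failure line is logged."""
    with open(log_path, "r", errors="replace") as log:
        log.seek(0, 2)            # start at end of file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.5)   # wait for new log output
                continue
            if is_dns_failure(line):
                snap = subprocess.run(["ipconfig", "/all"],
                                      capture_output=True, text=True)
                with open(out_path, "a") as out:
                    out.write(line)
                    out.write(snap.stdout)
```

This captures the DNS configuration at the exact moment of failure, which is otherwise hard to do by hand given how quickly the condition clears.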


We had this for a long time. It was completely random with v8.1.

The problem was that by the time you get notified, it is probably already resolved. Status turns green on the next refresh in the console or on the client with no action on our side. When you reboot right after the notification, it is OK immediately.

I was trying to get logs, but it is uncatchable: there was a PC where we had 5 incidents per hour; we started logging and it never showed again.

It calmed down almost completely in recent weeks, and I haven't seen this warning on version 9, but we just started upgrading (we skipped the mass upgrade to 8.1 because of this).

 


  • 4 weeks later...

Along the same lines, is there a way to force LiveGrid to use only the US servers? As a result of a ransomware attack a few months back, we have geo-blocked most countries on our firewalls and only open them on a case-by-case basis. Austria and Slovakia are not on the list at this time. Needless to say, we get the LiveGrid-unavailable popup throughout the day.


  • Administrators
8 hours ago, Robert Palmer said:

Along the same lines, is there a way to force LiveGrid to use only the US servers?

This is not possible; however, you should be routed primarily to US servers.


In some cases we are not; the issue did come back on our end too after working correctly for a few days. This would be a nice policy option to have in the next version of ESET.


16 hours ago, Marcos said:

This is not possible; however, you should be routed primarily to US servers.

For the most part that is the case. However, probably 8-10 times per day my users get the popup stating LiveGrid is unavailable. Obviously, for non-technical users this causes some anxiety that they are not protected, regardless of how many times we tell them it is not an issue with the protection on their computer.


  • Administrators

Wouldn't it be an option to allow communication with ESET's servers while keeping the geo-ip block in place?


12 minutes ago, Marcos said:

Wouldn't it be an option to allow communication with ESET's servers while keeping the geo-ip block in place?

Unfortunately no. With our firewalls, the geo-block is enforced at the country level with no overrides for specific servers/domains. However, I did find an option in the console that will at least accomplish the desired effect of stopping the client-side notifications.



This topic is now closed to further replies.