
Mirek S.

ESET Staff

Everything posted by Mirek S.

  1. Hello, and thanks for the report. To clarify: is this issue visible in the endpoint configuration view or in the ESMC policy view? If it is about the endpoint configuration view, most likely the configuration module has not been updated yet (upgrading to EES 7 does the trick as well). Thanks.
  2. If you installed MDM on a Windows machine, the MDM HTTPS certificate chain (in this case the ERA CA you used to generate the certificate) must be imported into the machine keystore. This requirement will be removed in 7.0, as we have moved away from the Windows crypto API. https://help.eset.com/era_install/65/en-US/certificate_mdm_https.html A quick way to sanity-check the trust setup from the MDM host is sketched below. HTH
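
A minimal sketch for checking that the MDM HTTPS endpoint validates against the ERA CA, assuming you have the ERA CA exported to a PEM file; the host name, port and file name below are placeholders, not product defaults:

```python
import socket
import ssl

# Placeholder values -- adjust for your environment (not product defaults).
MDM_HOST = "mdm.example.com"
MDM_PORT = 443
ERA_CA_FILE = "era-ca.pem"   # ERA CA certificate exported to a PEM file

# Trust only the ERA CA -- the same validation the machine keystore must be
# able to perform for the MDM HTTPS certificate once the CA is imported.
ctx = ssl.create_default_context(cafile=ERA_CA_FILE)

with socket.create_connection((MDM_HOST, MDM_PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=MDM_HOST) as tls:
        print("TLS handshake OK,", tls.version())
        print("Server certificate subject:", tls.getpeercert().get("subject"))
```
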
  3. Certificates generated in ERA 6.3 (IIRC) and older did not contain the CA (i.e. the chain was incomplete in the certificate blob). You may import just the ERA CA into the machine certificate store on the MDM host (the procedure is in the link mentioned by oliver) to work around this issue, and it should start working (the recheck is done, I think, every ten minutes). However, in the near future (version 7) we will require the full certificate chain in the HTTPS certificate (it was a bad decision that this was not validated from the start). These changes are being made to improve support for Apple devices and tighten security, so sorry this was not communicated better, but we currently can't auto-solve this for users. You may also use the certificate change functionality added in 6.5: if you change the HTTPS certificate, all devices will (based on their communication rate) switch to the new certificate and the old one is eventually discarded. A quick way to see whether your certificate blob contains the chain is sketched below.
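
To illustrate the incomplete-chain case, this minimal sketch (assuming the HTTPS certificate is exported as a PEM bundle; the file name is a placeholder) just counts how many certificates the blob contains; a leaf-only file is the problematic case described above:

```python
# Placeholder path -- point it at your exported HTTPS certificate PEM bundle.
PEM_BUNDLE = "mdm-https.pem"

with open(PEM_BUNDLE, "r", encoding="utf-8") as f:
    data = f.read()

# Each certificate in a PEM bundle starts with this marker.
cert_count = data.count("-----BEGIN CERTIFICATE-----")
print(f"{PEM_BUNDLE} contains {cert_count} certificate(s)")
if cert_count < 2:
    print("Only the leaf certificate is present; the CA chain is missing from the blob.")
```
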
  4. Possibly one last idea, if you're willing to accept giving users the old EESA password: you can immediately enforce a new password via policy. That way they will have the rights to re-enroll but won't have the rights to manipulate EESA afterwards. I'll send someone from the Android team here to see if they can suggest any workaround for the password requirement.
  5. Is your previous certificate still valid? If yes, you can change back to the previous certificate (with an immediate timeout), then start the normal certificate change process with a timeout, and devices should be able to reconnect. Theoretically, that is (I'm unsure whether the certificate change process handles this scenario well).
  6. Well, re-enrollment is not that extreme. You just go to the web console and send an email to all your users (assuming you connected MDM with user management in ERA) and hope they click on the link... Certificate change with enrollment profile replacement is in the same place the certificate change used to be, in the policy (it's new in 6.5, so you probably changed the certificate in 6.4, where it happened immediately). You just specify until when the old certificate can still be used instead of the new one you provide.
  7. EESA (the Android application) uses certificate pinning, i.e. any change of the server certificate invalidates the trust between devices and the MDM server (see the sketch below for the general idea). The bunch of errors is most likely devices refusing to talk to MDM. I'm sorry, but you'll have to re-enroll the devices (you can do this via the re-enrollment task to keep the history and configuration of the devices). In 6.5, functionality was added for certificate change with a timeout, where enrollment profiles are replaced within this timeout with the updated certificate; I assume you didn't use this functionality to replace the server certificate? As a side note, log files can contain sensitive information; when asked for them, please send them via PM. Please also erase those you uploaded (you took care not to mention the domain, but it is present in the log files). HTH, M.
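
To illustrate the certificate pinning idea (a generic sketch, not the actual EESA implementation; host, port and fingerprint are placeholders): the client remembers a fingerprint of the server certificate at enrollment time and refuses to talk when the presented certificate no longer matches, which is why a server certificate change breaks communication.

```python
import hashlib
import socket
import ssl

# Placeholder values -- not product defaults.
MDM_HOST = "mdm.example.com"
MDM_PORT = 443
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

# Standard chain/hostname validation is skipped here; the trust decision is
# made purely by comparing the pinned fingerprint below.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((MDM_HOST, MDM_PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=MDM_HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_SHA256:
            # This is the state that re-enrollment fixes: the pin no longer matches.
            raise ssl.SSLError("pinned certificate mismatch, refusing to talk")
        print("Pinned certificate matches, communication allowed")
```
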
  8. Yes, we don't have an application on iOS; iOS only supports its built-in Mobile Device Management. You're welcome.
  9. Short story: restart the device. Long story: devices/browsers usually cache previously accepted certificates; when this certificate changes on the website (you changed the HTTPS certificate), browsers try to validate against the previously accepted certificate and fail. I have noticed some browsers end up in an infinite reconnection loop (unsure if it was Safari, but this was definitely reproduced in-house).
  10. I'm definitely not the best person to answer that (I'm not really involved in the module release cycle), so here is just the little I know. The risk depends on which modules are in pre-release mode. Agent modules are relatively low risk, as the configuration module, loader module and translator are the only modules being updated. The translator module is relatively safe, as is the configuration module, and I don't think there will ever be problems with the loader. Even pre-release updates are usually rolled out in batches; that is, only a certain number of downloads is permitted for a new module, so when there is an error only a part of the customer base is affected. If an issue is found, the module release is stopped and we can fix it (and possibly roll customers back to the previous version). However, I would not run a production environment on pre-release. This option is meant for test environments or home customers who want to support ESET by giving us early feedback about possible issues. 99 times out of 100 you won't encounter problems (it's not like someone just goes through the code and randomly pushes out modules with untested changes), but a production environment should be more like 9,999 out of 10,000.
  11. You may want to open a support ticket for this. The MigrationTool log files should contain more information on why the database connection failed. This is most likely due to privileges (the tool did not require administrative privileges via its manifest, and it uses the same database framework ERA V5 had, despite it being not so programmer-friendly, to avoid database connectivity issues). Anyway, the logs would tell me the most; they are generated next to the tool (migration.log). Seeing the picture, we did not provide a link to the log file in the fatal error screen; this will be remedied (doh)... As a side note, a new tool for V5 => V6 migration is in progress, so having noticed this I'm going to revive this old thread in the hope of feedback... (and possibly fix this, as this issue never reached me in the 3 years this tool has been out).
  12. In previous versions of MDC (6.4 and older), the stop-managing task triggered a 7-day interval in which MDC tried to reach the device and erase the data which MDC had put on it.
      - This is (also) for security reasons: when you want to stop managing a device, we don't know whether it was stolen, sold, or lost to the toilet.
      In 6.4 (or was it the second 6.3?) we allowed enrollment of a stop-managed device (i.e. a device which was in the 7-day "we want to erase you" period).
      - Many users don't care about re-enrollment, and when something goes boom they reinstall. Re-enrollment can keep your previous data (logs, policy and user setup); it's basically meant to reconnect a device which for whatever reason stopped communicating (due to our bugs, unforeseen issues, a wipe, etc.).
      In 6.5 this was enhanced so that when you "stop manage" and then "delete" a device, it does not re-appear in the web console. We still try to reach the device, and we still try to give you the results of the stop-management task (i.e. you may want to know whether your profile with email password etc. was deleted, or whether you must change passwords which were present on the device before a thief took it).
      - This, however, has other issues which are built into the architecture we use. If you stop managing a device and immediately delete it, the stop-manage task may not reach MDC, so the device will re-appear. These two actions must therefore still be separated by a single replication cycle (usually a few minutes is good enough, but it depends on the MDM replication settings).
      Enrollment of a device on which the "stop managing" task was never executed is prohibited (users tend to cure this via DELETE from the devices table :)).
      - This is for security reasons: from MDC's point of view the device is still managed, so whoever is trying to enroll may be an attacker, who would gain access to policies meant for another device and prevent the previously enrolled device from connecting to MDC.
  13. Marcos is correct. I will add another piece to the puzzle: when the Remote Administrator V6 Agent is installed on a computer with a V5 product, it will automatically and periodically reconfigure the product to connect to localhost (i.e. to the Agent) and attempt to reconfigure the product to the last known policy from the V6 server. V5- and V6+ are inherently incompatible.
  14. Assuming the issue is in the Translator module and it's the same one we think it is, the fix is available in 1579B. So either you can ask support to provide this module (I'm not going to do that on the forums) and replace it in the Agent installation (on *nix this is "/var/opt/eset/RemoteAdministrator/Agent/Modules/"), or you'll have to wait for this module to become available through the normal update channel and for the Agents on the affected computers to update... You may also switch to pre-release updates in the Agent update policy to get a newer Translator module (if you dare to; it's pre-release after all, and it should get you to version ~1583). UESETS (File Security) does not use the Translator module, so only the Agent modules need to be updated. If this issue is not what we think it is, we would need the scan logs that fail conversion (in binary format; once again, however, please send these via support, as scan logs may contain sensitive information and you don't want to share it here).
  15. When the debug log is enabled in the Agent (in this case the fault probably lies in the Translator module), the legacy connector (EG1C) logs the ESET module versions that are used. In the Agent log, look for lines like "Module (nnn) Version: nnn" and post them (note: you may need to restart the Agent for this log to appear); a rough sketch for extracting them is below. I suspect this issue was already fixed a few weeks ago and the Agent is running with an outdated Translator module.
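
A minimal sketch for pulling those module version lines out of the Agent trace log (the log path below is an assumption for a default Linux install; adjust the path, or the pattern, for your system):

```python
import re

# Assumed default Linux Agent trace log location -- adjust for your install.
TRACE_LOG = "/var/log/eset/RemoteAdministrator/Agent/trace.log"

# Match lines such as: Module (1234) Version: 1579
pattern = re.compile(r"Module\s*\(.*\)\s*Version:.*", re.IGNORECASE)

with open(TRACE_LOG, "r", encoding="utf-8", errors="replace") as log:
    for line in log:
        if pattern.search(line):
            print(line.rstrip())
```
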