It’s easy to update the software on your Honor 8X

Honor is always adding new features and making improvements to its smartphones. The Honor 8X will get such love throughout its life cycle. You’ll always want the latest features and updates, so it’s worth knowing how to pull down those meaty software updates once they arrive.

Products used in this guide

How to update the software on your Honor 8X

It's no sweat to update your Honor 8X when a new software version is available for your phone. Here's what you do:

  1. Charge your battery to at least 50% to avoid interruptions during the update.
  2. Try to get connected to Wi-Fi to avoid any issues with your download.
  3. With the device powered on, pull down the notification shade.
  4. Tap the settings icon in the upper right corner.

  5. Once in settings, scroll down until you see “System”. Tap it.
  6. Now, tap “System update.”
  7. The Honor 8X should automatically look for the latest software. If not, manually check using the “Check for Updates” button. If you’re on the latest build, it will tell you so and you’re free to exit this menu.

  8. If an update is available, it’ll pull up the changelog and ask if you want to download it. Tap the download button to initiate the installation process.
  9. You’ll see a progress bar letting you know where the download is. Once it’s finished, you can choose whether to install it immediately or overnight. Select “Immediately” to install it right away.
  10. If you'd prefer to do it overnight, select "Overnight" and ensure the device is plugged in and connected to Wi-Fi between 2 AM and 4 AM.
  11. Once your update has been installed, your device will reboot to finalize it.
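The preconditions in steps 1, 2, and 10 above amount to a simple checklist, which can be sketched in Python. This is purely an illustration of the guide's logic, not anything Honor's software actually exposes:

```python
def ready_for_update(battery_percent: int, on_wifi: bool, plugged_in: bool = False) -> bool:
    """Steps 1-2: at least 50% battery (or on a charger) and a Wi-Fi connection."""
    return (battery_percent >= 50 or plugged_in) and on_wifi

def overnight_window(hour: int) -> bool:
    """Step 10: the overnight install runs between 2 AM and 4 AM."""
    return 2 <= hour < 4
```

For example, a phone at 30% battery on Wi-Fi isn't ready unless it's plugged in, and an overnight install scheduled for 3 AM falls inside the window while 5 AM does not.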

You should now be on the latest software. For good measure, repeat the steps above. This will verify that you actually installed the latest software and that it wasn’t just a prerequisite for another update.

Our top equipment picks

Deceptively Powerful


Honor 8X

A new standard of quality.

Honor has long redefined what a cheap smartphone should be, and the Honor 8X strengthens that definition more than any of its predecessors. It combines modern looks and technology while somehow maintaining a price tag that’s less than $250.

Premium looks, an AI-powered camera, strong internals, nice battery life, and a FullView display make up the core of the Honor 8X experience. Sprinkle on other niceties like a fingerprint sensor and NFC, and you’ll save yourself several hundreds of dollars without sacrificing a quality Android experience.

Additional Equipment

You might need a longer Micro-USB cable for your Honor 8X if you want to update it overnight while keeping it close by.


Anker PowerLine+ Micro-USB Cable
($11 at Amazon)

Anker’s 6-foot Micro-USB cable is double braided with nylon material to keep it working for a really long time.

This post may contain affiliate links. See our disclosure policy for more details.

Read More

Samsung One UI (Android 9 Pie) review: Still Samsung’s software

Samsung One UI (Android 9 Pie)

The update to Android 9 Pie with a new One UI interface is the biggest visual change Samsung has pushed to its phones in years. Alongside all of the requisite improvements you expect with any software update, Samsung’s Pie release makes substantial changes to the look and operation of the entire software experience. But naturally, much of Samsung’s legacy software is here to stay as well, making sure you’ll never forget what kind of phone you’re using.

This is how it all comes together in Samsung’s latest software: One UI and Android 9 Pie.

Samsung One UI What I like

Samsung One UI (Android 9 Pie)

Every couple generations, Samsung makes a big step forward in software design and capabilities. I’d argue that Samsung’s software got good starting with Android 5.0 Lollipop on the Galaxy S6; but it really only stepped up to being great with Android 7.0 Nougat on the Galaxy S8. One UI with Android 9 Pie is undoubtedly another major step forward.

One UI is effectively a complete redo of Samsung’s interface, colors and iconography from top to bottom. Far more than a coat of paint. The software is even further flattened, and puts emphasis on whites and greys with liberal use of soft-radius corners and negative space. In many ways One UI follows Google’s Material Design principles — take a look at a Pixel 3 alongside a Note 9 and there are striking similarities in the notification shade, settings, multitasking screen and menus. Visually, this is the best software Samsung has ever made. The consistency of design implementation across the interface and apps is near-perfect, and it feels light and modern both at a glance and with use.

Samsung One UI and Android 9 Pie


The improvements aren’t limited to style; they also have some substance. The biggest change in principles across the entirety of the interface is the move to bring interaction points further down on the screen so you don’t have to reach to the top of the phone as often. Throughout the interface and Samsung’s own apps, the main interaction points have effectively shifted down to only fill the bottom two-thirds of the screen — the top third, in return, is mostly for viewing rather than touching.

Samsung apps on Android 9 Pie

This leads to some awkward blank space at the top of many apps, but when you stop thinking about it and realize the benefits of not having to reach the top of the screen — on increasingly tall phones — it makes a ton of sense. The paradigm of course breaks down when you use third-party apps, but Samsung has done everything it can for its own software to make things a bit more one-handed friendly without having to jump into the dedicated one-handed mode as often.

This is also the first Samsung software to incorporate a full dark mode option, available at the tap of a button. It isn’t as customizable as what OnePlus offers, nor is it dynamic like Google’s, but it’s miles ahead of having to apply a system-wide third-party theme as before. The dark mode is actually called “Night Mode” and is ostensibly designed to be used at night to reduce eye strain, but it’s completely separate from the blue light filter and isn’t able to be set to turn on/off on a schedule. But that’s just fine for the dark mode die hards that want to run it 100% of the time. And unlike the theme approach, night mode touches every part of the system and Samsung’s apps for a complete black-out look. Not everyone is a fan of dark mode, but the contingent of dark mode fans is too big to ignore at this point.

Samsung One UI night mode dark theme


Although conservative, Samsung’s first step into the world of gesture-based navigation is successful. It (smartly) chose not to fully implement Google’s navigation system, nor did it go its own way with something altogether new and elaborate — the result is a navigation system with copious options that help blend the best of Oreo and Pie. Critically, if you want to keep things exactly as they were you can hold onto three-button navigation (and yes, you can swap the back and recents buttons). But Samsung included Pie’s home button left-to-right swipe gesture for quick multitasking, while also retaining the recents button. If you prefer to save some screen real estate, you can replace all three buttons with simple swipe-up gestures in their place. No complicated mix of multiple gestures or different lengths of swipes; just a straightforward replication of what you know from the buttons.

The entire interface looks better, is easier to use, and simply integrates Pie’s gestures and multitasking.

No matter how you get there, the multitasking screen itself looks near-identical to Google’s own, with horizontal cards and smartly predicted app suggestions at the bottom of the screen. Some decry the lower information density in this view compared to the old Rolodex-style interface. But this is far more user-friendly and provides quick access to the last three apps you used, plus suggested apps, which is what most people want most of the time. Samsung managed to incorporate gestures and many of Android 9 Pie’s native multitasking features cleverly, while retaining the old options for those who update from Oreo — it’s the best of both worlds.

One UI is filled with lots of little tweaks strewn about its apps, but these have far less impact on a daily experience so often dominated by the third-party apps many people prefer over Samsung's built-in options. The camera app got some of the biggest user-facing changes, but for the most part the apps were simply redesigned to fit the new design language rather than dramatically changed in function. Still, it shows that Samsung is continuing to focus on visual consistency across its apps — even if not everyone uses them.

Samsung One UI What’s not great

Samsung One UI (Android 9 Pie)

One thing that’s remained constant through every revolutionary Samsung software update is that it’s still undeniably “Samsung” software. It should come as no surprise that the prior statement kicks off the section of this review where I talk about things that aren’t so great. The biggest issues with One UI have nothing to do with what Samsung did in this generation — the issues are completely rooted in Samsung’s software legacy.

Samsung’s Pie software still suffers from extreme amounts of cruft throughout the system.

Samsung’s Pie software still suffers from extreme amounts of cruft throughout the system, with overly-complicated settings and duplicative apps. And while some basic functions have been lifted to the surface, changing many aspects of the phone still requires jumping through several layers of settings to tweak things how you want. Samsung’s launcher still shows a few glimpses of the “old” Samsung, with folders that don’t adhere to One UI design and a general aesthetic that feels stuck in the past. Throughout One UI, you see a lot of the experience that remains functionally unchanged from the previous version.

One UI and Pie also didn't address many bugs (or at least, issues) from Oreo; for example, Pie still offers poor auto-brightness management in dark environments, often blasting the screen up near max brightness in a dark room. And multi-colored media notifications still clash heavily with the notification shade. Drill down through the software and you'll still find little issues like these that haven't been fixed, even though they're exactly the things a platform update should address.

Despite many visual improvements, the always on display and lock screen feel stuck in the past.

Despite many of the visual improvements to the lock screen experience, Samsung hasn’t really changed or improved the functionality of either the lock screen or always on display in years. What was once a strength now feels like an old take on the idea, with Google, OnePlus, Motorola and others all doing a better job of integrating the always-on (or ambient) display with the lock screen. Samsung’s always on display, lock screen and home screen feel like distinctly different experiences layered on each other, rather than different parts of the same interface — the jump from one to the other is not visually seamless nor particularly useful, and the lock screen’s disjointed take on “widgets” and displaying notifications is clunky at best.

And I know it’s a small thing, but Samsung’s suite of ringtones and notification sounds feel stuck in the Galaxy S5 era; it’s time for a refresh. Some of the core system sounds have been updated, but the rest feel a bit too … cartoonish for my taste. Samsung’s software is no longer bright, colorful and playful as it once was, but the sounds it offers are very much stuck in that time period.

All of Samsung’s best design ideas from 2018 can’t escape over a decade of built-up Samsung cruft. Now I freely admit that the level of hack-and-slash gutting of Samsung’s software that would be required to actually streamline it is probably impossible at this point; but it’s important to recognize that just because One UI looks appreciably better doesn’t mean that it’s free from all of Samsung’s years of baggage.

The future of Samsung’s software

Samsung One UI (Android 9 Pie)

Samsung's Android 9 Pie release, with the new One UI interface and features, is something every Samsung phone owner should look forward to, whether you have an older phone and will need to buy a Galaxy S10 to get it or are patiently waiting for an update on the Galaxy S8 or Galaxy Note 8. One UI is easily Samsung's most cohesive and visually impressive version of its operating system, even if it is bogged down by the baggage of versions that came before it.

The software changes are more than skin deep with One UI. Yes, it's beautiful, more modern feeling, and more consistent; but it's also functionally better for one-handed use and has little feature improvements sprinkled throughout. Its new navigation design perfectly integrates the best of Oreo and Pie. And aside from a few wonky icons that some people aren't fans of, the visual changes are a welcome breath of fresh air that actually follows Google's own design guidelines. This is software you'll want on your phone, and it's a great sign of how far Samsung's come with its software design in over a decade of making high-end smartphones.


Andrew Martonik

Andrew is the Executive Editor, U.S. at Android Central. He has been a mobile enthusiast since the Windows Mobile days, and covering all things Android-related with a unique perspective at AC since 2012. For suggestions and updates, you can reach him at [email protected] or on Twitter at @andrewmartonik.

Read More

Should All Government IT Systems Be Using Open Source Software? (linuxjournal.com)






Posted by EditorDavid from the world-domination dept.

Writing at Linux Journal, Glyn Moody reports that dozens of government IT systems are switching to open source software.


“The fact that this approach is not already the norm is something of a failure on the part of the Free Software community…”

One factor driving this uptake by innovative government departments is the potential to cut costs by avoiding constant upgrade fees. But it’s important not to overstate the “free as in beer” element here. All major software projects have associated costs of implementation and support. Departments choosing free software simply because they believe it will save lots of money in obvious ways are likely to be disappointed, and that will be bad for open source’s reputation and future projects.




Arguably as important as any cost savings is the use of open standards. This ensures that there is no lock-in to a proprietary solution, and it makes the long-term access and preservation of files much easier. For governments with a broader responsibility to society than simply saving money, that should be a key consideration, even if it hasn’t been in the past…. Another is transparency. Recently it emerged that Microsoft has been gathering personal information from 300,000 government users of Microsoft Office ProPlus in the Netherlands, without permission and without documentation.

He includes an inspiring quote from the Free Software Foundation Europe about code produced by the government: "If it is public money, it should be public code as well." But when it comes to the larger issue about the general usage of proprietary vs. non-proprietary software — what do Slashdot's readers think?


Should all government IT systems be using open source software?




Read More

InReach Ventures, the ‘AI-powered’ European VC, closes new €53M fund

InReach Ventures, the so-called "AI-powered" venture capital firm based in London, is announcing the first closing of a new €53 million fund targeting early-stage European technology companies, surpassing the fund's original €50 million target.

Founded by former Balderton Capital General Partner Roberto Bonanzinga, along with Ben Smith (former U.K. Engineering Director at Yammer) and John Mesrie (former General Counsel at Balderton Capital), InReach set out in 2015 to use technology to help scale VC, especially across Europe’s idiosyncratic and highly fragmented market.

The firm's proprietary software-based approach, which is underpinned by machine learning, claims to be able to generate and evaluate deal-flow more efficiently than traditional venture firms that mostly employ human VCs alone — although, admittedly, practically every VC firm is underpinned by some element of data science and/or technology these days. Berlin's Fly VC is another machine learning-enabled early-stage VC that comes to mind.

However, InReach certainly appears to be putting its money where its mouth is, disclosing that it has invested over €3 million in the development of its software, codenamed “DIG”. To back this up, Bonanzinga tells me the firm employs “more software engineers than investors”. (I saw an early demo of the software a couple of years ago and even then it seemed legit.)

Regards the new fund, Bonanzinga says InReach is targeting the most promising and innovative startups across Europe, primarily in the areas of consumer internet, software as a service and marketplaces. “We are geographically agnostic and will invest in companies anywhere in Europe, from Helsinki to Barcelona, from Warsaw to Rome,” he says. “In most cases we will be the first institutional investors and our first cheques will be between €500,000 and €2 million”.

To date, InReach Ventures has invested in eight startups from across Europe. They include Oberlo (Lithuania), which was subsequently acquired by Shopify, Soldo (Italy/UK), Tutorful (U.K.), Shapr3D (Hungary), Traitly (Sweden) and Loot (Germany).

Below follows a lightly edited Q&A with Bonanzinga on the new fund, how AI can be used to scale venture capital, and why machines won’t put VCs out of a job entirely any time soon.

TC: You have often said that venture capital doesn’t scale, especially across a fragmented market like Europe, but what do you mean by this?

RB: People get very excited about ecosystems but the data shows that startups can come from anywhere; the big technology hubs or more remote locations. This is carried through to Europe's largest exits: from Betfair in London to Zalando in Berlin, from Supercell and Spotify in the Nordics, to Criteo in France and Yoox in Italy, and so on. So not only is deal sourcing fragmented across Europe, but so are the returns.

Traditional venture firms have looked to manage this fragmentation by throwing people at the problem, but if you want true coverage you need a presence in every city in Europe. That's how to think of our technology platform: like having a highly trained associate in every city and town across the whole of Europe, providing structured, diligent deal-flow. With this data/technology-driven approach we can be truly pan-European at the early stage, even as the first institutional investor on the cap table.

TC: A lot of VCs say they use technology to help find or manage deal-flow, how is InReach any different?

RB: Many venture firms talk about data and software. Lately, it has become a hot topic in pitches to limited partners. I predict a new hype: the rush of needing to check the box of “we have a data strategy”. We will have many firms with 30+ investment professionals and a data engineer in a corner. The real question is how many firms are willing to transform their professional service DNA into a product DNA? As always, this is more of a people/organisational question, rather than a question simply of the use of technology.

Take a look at InReach: we are a very atypical founding team for a venture firm. In particular, Ben Smith comes from a software engineering background and has built many data platforms and product development teams (most recently at Yammer/Microsoft). The majority of the people at InReach are software engineers. This is the only venture firm we know of in which there are more software engineers than investors! So far we have invested over €3 million in developing our proprietary technology platform.

TC: Without giving away your secret sauce, how does the InReach platform work, both in terms of the machine learning/feedback loop or the signals/data you plug into it?

RB: From a technology perspective, our logical architecture is primarily based on 3 distinct layers: data, intelligence, and workflow. The data layer is a mix of massive data aggregation, with deep data enhancement, including the generation of a large set of original data. The intelligence layer makes sense of these millions of data points through an ensemble of machine learning algorithms, ranging in complexity from simple rules to advanced networks. Given this data-driven approach and the significant deal-flow this generates, we invest heavily in building a workflow product which allows us to efficiently process thousands of companies each month.
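Bonanzinga's three-layer description maps onto a familiar pipeline shape. The sketch below is purely illustrative: every function, field name, and scoring rule here is invented for the example, not InReach's actual DIG platform.

```python
# Hypothetical sketch of the three layers described above: data (aggregation
# plus enrichment), intelligence (an ensemble of scorers, from simple rules
# to stand-ins for learned models), and workflow (queue high scorers for
# human review). All names and rules are invented.

def enrich(company: dict) -> dict:
    """Data layer: derive new fields from the raw aggregated ones."""
    company = dict(company)
    company["team_size_growth"] = company.get("team_size_now", 0) - company.get("team_size_6mo_ago", 0)
    return company

def rule_score(c: dict) -> float:
    """A simple hand-written rule in the ensemble (invented criterion)."""
    return 1.0 if c.get("country") in {"LT", "IT", "HU"} else 0.5

def growth_score(c: dict) -> float:
    """A stand-in for a learned model: reward hiring momentum."""
    return min(1.0, max(0.0, c["team_size_growth"] / 10))

def ensemble_score(c: dict) -> float:
    """Intelligence layer: combine the individual scorers."""
    scorers = [rule_score, growth_score]
    return sum(s(c) for s in scorers) / len(scorers)

def triage(companies: list[dict], threshold: float = 0.6) -> list[dict]:
    """Workflow layer: surface only companies worth an investor's time."""
    scored = [(ensemble_score(enrich(c)), c) for c in companies]
    return [c for score, c in sorted(scored, key=lambda t: t[0], reverse=True) if score >= threshold]
```

The design point is the separation: the data and intelligence layers can grow in sophistication independently, while the workflow layer keeps the volume of candidates manageable for a small human team.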

TC: You say the final investment decision is still made by humans: why is that and do you think this will always be the case?

RB: As with any AI company, it’s all about data. We have spent the past 3 years aggregating data from across the internet and building algorithms to provide us with significant dealflow. Much more crucially, we have been collecting and generating our own proprietary data-set of investment decisions and how these startups grow and adapt over time. Clearly this will only get more powerful.

However, especially at this early-stage, so much of the investment decision is based on the founders and what we call the DNA fit of the founders and the problem they are trying to solve. Some of this can be encoded in algorithms and learnt by AI, but there are still intangibles that ultimately require that we ask the question: do we enjoy spending time together?

TC: What has been the reaction by under-the-radar founders when they are discovered really early via InReach's software?

RB: The first question is always ‘How did you find out about us?’. Once we explain what we do and how the platform works we create an immediate connection with the entrepreneur. This is exactly what happened when we reached out to 5 entrepreneurs in Vilnius who had started a company called Oberlo. Over the following year, we helped them grow and expand to 30 people across both Vilnius and Berlin, prior to their acquisition by Shopify.

We are taking a very entrepreneurial approach to investing; we run InReach more as a product development organisation, rather than a professional services firm, so we look and feel native to the entrepreneurs we talk to. We try to share our experiences and current-best-practices through the company building process, whether it be OKRs, different agile development methodologies, product roadmaps, etc.

Reaching out to promising entrepreneurs early is not the only advantage that DIG gives us. We are also very efficient and responsive when analysing inbound opportunities. In fact, we optimize our website to convert visitors into sharing their startup with us. We are not concerned about being bombarded by opportunities, because we have developed a scalable workflow that allows us to efficiently manage significant dealflow.

Read More

Electric, the startup that automates IT, raises $25 million from GGV

Electric.ai, the New York-based startup that offers chat-based IT support, has announced the close of a $25 million Series B round led by GGV. As part of the deal, partner Jeff Richards will be joining the board.

Founder Ryan Denehy launched Electric in 2016. He had previously run two startups, sold to USA Today Sports and Groupon, respectively, and along the way realized that the simplicity of using a service like Zenefits simply didn't exist in the IT world.

“It was all local service providers, and they all charge way too much money,” said Denehy. “I thought ‘this is so nuts!’ Companies are using more and more technology every day.”

With his second startup, Swarm, he saw even more clearly how big of a problem this was as the company sold a product that required hardware installation at retailers.

“We were building a company on top of local IT providers, and I saw up close and personal how difficult it was and how fragmented the industry was.”

And so, Electric was born.

The premise is relatively simple. Most of IT’s tasks focus on administration, distribution and maintenance of software programs, meaning that the individual IT specialist doesn’t necessarily need to be desk-side troubleshooting a hardware issue.

Companies using Electric simply install its software on every corporate laptop, giving the top IT employee or the org’s decision-maker a bird’s-eye view of the lay of the land. They can grant and revoke permissions, assign roles and make sure everyone’s software is up to date. By integrating with the APIs of the top office software programs, like Dropbox and G Suite, most of the day-to-day tasks of IT can be handled through Electric’s dashboard.
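The architecture described above is essentially a fan-out layer over each SaaS provider's admin API. The sketch below illustrates that shape only; the classes and method names are invented, and a real integration would call Dropbox's and Google's actual admin APIs rather than these stubs.

```python
# Hypothetical fan-out: one dashboard action is applied across every
# integrated provider. Provider classes here are illustrative stubs.

class Provider:
    name = "base"

    def grant(self, user: str, role: str) -> str:
        return f"{self.name}: granted {role} to {user}"

    def revoke(self, user: str) -> str:
        return f"{self.name}: revoked access for {user}"

class Dropbox(Provider):
    name = "dropbox"

class GSuite(Provider):
    name = "gsuite"

class Dashboard:
    """Bird's-eye view: one call per user, applied to every integration."""

    def __init__(self, providers):
        self.providers = providers

    def onboard(self, user: str, role: str = "member"):
        return [p.grant(user, role) for p in self.providers]

    def offboard(self, user: str):
        return [p.revoke(user) for p in self.providers]
```

With this shape, offboarding a departing employee is a single call (`Dashboard([Dropbox(), GSuite()]).offboard("alice")`) instead of a manual pass through each vendor's admin console, which is the routine work the article says gets automated.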

This leaves IT professionals time to focus on actual troubleshooting, hardware installation, etc.

For startups that haven’t yet hired an IT person, Electric connects startups that need help with installation or in-person troubleshooting with local vendors.

Electric says it has automated around 40 percent of IT tasks, with plans to reach 80 percent over the course of 2019.

The company currently has around 300 customers, which rounds out to about 10,000 total users, and serves 10 U.S. markets, including New York, San Francisco, Boston, Chicago and Austin, among others.

The new funding brings Electric’s total funding amount to $37.3 million.

Read More

Software Executive Exploits ATM Loophole To Steal $1 Million (zdnet.com)






Posted by BeauHD from the sneaky-bastard dept.

An anonymous reader quotes a report from ZDNet:

A Chinese software manager has been sentenced after being found guilty of stealing approximately $1 million from Huaxia Bank ATMs containing security weaknesses. The 43-year-old former manager, employed in Huaxia Bank's software and technology development center, spotted a "loophole" in the bank's core operating system which offered an unrecorded timeframe in which to make withdrawals, as reported by the South China Morning Post. In 2016, Qin Qisheng realized that cash withdrawals made close to midnight were not recorded by the bank's systems, and he began systematically abusing the glitch.





Qin wrote a number of scripts which, once implanted in the bank's software, allowed him to probe the loophole without raising suspicion. It appears these tests were successful, as the software chief then made withdrawals of between $740 and $2,965 for over a year, the publication says. The money had to come from somewhere, and so Qin used a "dummy account" established by the bank for testing purposes. In total, Chinese law enforcement says that the former manager was able to steal over seven million yuan, equivalent to roughly $1 million. Huaxia Bank eventually uncovered the scheme, which Qin attempted to explain away as "internal security tests." When it came to the money, the software manager said the funds were simply "resting" in his own account but were due to be returned to the bank.

The financial institution accepted his explanation and fixed the problem, but law enforcement didn’t and arrested him for theft in December 2018. Qin was given a jail term of ten and a half years, and on appeal, the sentence was upheld.
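The reported flaw boils down to transactions falling outside the time window a ledger checks. The toy code below illustrates that class of bug with entirely invented bounds and logic; it is not the bank's actual system.

```python
from datetime import time

def is_logged(t: time) -> bool:
    """Toy version of the reported bug: the ledger only records withdrawals
    whose timestamp falls inside an incorrectly bounded business window,
    so a slice of time around midnight goes unrecorded. Bounds are invented."""
    return time(0, 5) <= t <= time(23, 55)

def process_withdrawal(ledger: list, t: time, amount: int) -> None:
    if is_logged(t):
        ledger.append((t, amount))  # recorded normally
    # otherwise the cash is dispensed but nothing is written down

ledger = []
process_withdrawal(ledger, time(12, 0), 1000)   # recorded
process_withdrawal(ledger, time(23, 58), 1000)  # falls in the blind spot
```

In this toy run, only the noon withdrawal appears in the ledger; the 11:58 PM one vanishes, which is the essence of an "unrecorded timeframe." The fix is equally simple: reconcile dispensed cash against the ledger, so any gap between the two totals surfaces regardless of timestamps.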




Read More

Torc Robotics and Transdev are launching autonomous shuttles to deliver people to public transit

Self-driving technology company Torc Robotics is partnering with Transdev, the public transportation giant, to deploy fully autonomous electric shuttles designed to provide free connections to existing transit like trains and buses.

The companies, which made the announcement Monday at CES 2019 in Las Vegas, are integrating Torc’s self-driving software stack and sensor suite into an autonomous shuttle known as i-Cristal that was unveiled earlier this year by Transdev and French manufacturer Lohr. Torc is licensing its Asimov self-driving software and sensor suite to Transdev.

The shuttles, which can seat up to 16 passengers, will operate in a dedicated lane offering a shared-ride mobility service at night and off-peak hours between the Massy transit station and the Paris-Saclay campus. Another autonomous shuttle service will operate on public roads offering a shared-ride mobility service throughout the business park and connecting to the tramway station in Rouen.

The partners are testing on closed courses and public roads before launching the public service trials in Paris-Saclay and Rouen.

The aim is to integrate autonomous shuttles into Transdev's public transportation networks, which are considerable. The public transportation company operates in 20 countries and its transit services provide 11 million passenger trips per day.

“At Transdev, we believe the future of mobility is increasingly P.A.C.E.: Personalized, Autonomous, Connected and Eco-Friendly,” Yann Leriche, Transdev’s North America CEO and head of autonomous transportation systems said in a statement. “We believe that public transport will lead and be the first place real autonomous services will be developed.”

The electric i-Cristal shuttles have Level 4 autonomous capabilities, a designation by the SAE that means these vehicles are able to operate fully autonomously in certain conditions or geographic areas. The shuttles, which operate without a steering wheel or pedals, can travel up to 19 miles per hour.

Torc Robotics’ specialty has been in automated heavy machinery and commercial equipment. But the company shifted its attention to consumer products in recent years. The company has integrated its Asimov self-driving car technology into Lexus RX and Chrysler Pacifica vehicles. Torc says it has tested these vehicles in more than 20 U.S. states while operating on both public roads and closed courses. 

Last year, at CES 2018, Torc announced a partnership with AAA to work on a set of safety criteria for using self-driving cars.

Read More

Amazon May Be Forcing Its Sellers to Contribute to Its Facial Recognition Program

In this Sept. 13, 2018, file photo Jeff Bezos, Amazon founder and CEO, speaks at The Economic Club of Washington’s Milestone Celebration in Washington.
Photo: Cliff Owen (AP)

Amazon has consistently faced ongoing outcry over its contentious Rekognition software, but that apparently isn’t stopping the company from testing out facial recognition technology on its sellers.

BuzzFeed News reported Wednesday that an individual in Vietnam claimed that, while he was attempting to create a seller profile, the company prompted him to grant Amazon access to his webcam and provide a clip of his face. More troubling, however, is that the individual claimed he was not able to opt out of the prompt and that, after complying, he could not locate the video in his profile.

In a screenshot of the apparent prompt provided to BuzzFeed News, the individual was told to provide the company with access to his webcam so that Amazon could “record a 5-second video of your face” that it said would be encrypted. According to BuzzFeed News, the company failed to answer some pretty big questions about the tool’s potential implications. Per the report:

Amazon declined to explain why or when it began asking some sellers for video proof of identity, in what regions it requests that proof, and what it does with the seller videos it records. The Seattle-based tech giant also would not say if the videos are processed by its Rekognition facial recognition technology, if a seller can remove video proof of identity from Amazon’s servers, and whether or not it has updated its seller agreements and privacy policies to address the collection and storage of biometric data.

Reached for comment about the reported feature, an Amazon spokesperson told Gizmodo in a statement by email that the company “is always innovating to improve the seller experience.”

The measure appears to be an effort to stem the creation of multiple seller accounts, which the company does not allow. Meanwhile, Amazon’s assurances about the technology were undermined by Rekognition’s only known law enforcement customer, the Washington County Sheriff’s Office in Oregon, which told Gizmodo that it does “not set nor do we utilize a confidence threshold.” Amazon has said that with respect to police work, “we guide customers to set a higher threshold of at least 95% or higher,” even as researchers and watchdogs have raised serious concerns about the technology and its questionable results.

While it’s not clear whether Amazon is using Rekognition for seller verification, it’s still unsettling that the company is squirreling away video clips of people’s faces for purposes unknown. And any uncertainty about how the company is using sellers’ biometric data would certainly not, as the company said, seem to “improve the seller experience.”

[BuzzFeed News]


Razer integrates Amazon’s Alexa voice controls and haptic feedback into its gaming platform

Razer, the company that makes high-end hardware and software tailored to gaming enthusiasts, is adding new voice and touch features to make its platform more immersive. Today at CES, the company announced that it will integrate Amazon’s Alexa into its gaming platform to let users control certain aspects of the Razer gaming experience by voice, and it unveiled a new range of devices it’s calling HyperSense, which provide haptic interfaces for gamers.

The Alexa integration will start to be rolled out in Q2 2019, while the HyperSense ecosystem is only getting previewed with no launch date at this stage.

Razer is also making some strides in its efforts to expand the ubiquity of its ecosystem to more than just Razer products: the company said that its Chroma Connected Devices Program — which brings in a new range of peripherals that can work with Razer machines — now has 15 new partners and covers 300 different devices that can run Chroma-enabled games and apps.

The Alexa integration is a signal of how, while Amazon has yet to build its own dedicated gaming hardware, it has nonetheless been making headway into that consumer sector through the 100 million devices that now work with the voice assistant. Last September, Microsoft announced that Alexa would work with the Xbox One, the first big gaming console announcement for Alexa after it made a little headway with Sony and the PlayStation Vue a year earlier.

As with those two, it looks like the Alexa integration with Razer is more about controlling what happens around the game: you will be able to voice-control lighting effects, device settings and so on, but not in-game actions themselves. In-game controls could be the next step, though perhaps one that Amazon would prefer to make itself, exploring synergies between existing Amazon assets like its Twitch video network for gamers, its games apps, its immersive AI-based experiences that lean on augmented reality, and its growing family of Echo devices, some with screens.

“We’re thrilled to work with Razer and provide customers a first-of-its kind integration that showcases how Alexa can enhance the gaming experience,” said Pete Thompson, VP of the Alexa Voice Service, in a statement. “With Alexa, users can control compatible Razer peripherals while taking full advantage of other Alexa capabilities, including the ability to manage smart devices, access tens of thousands of skills and more.”

In the case of Razer, the company is bringing Alexa into its ecosystem by way of its Razer Synapse 3 Internet-of-Things platform, which it uses to connect up Razer and third-party peripherals that a user might have set up to play.

As Razer describes it, users wearing Razer headsets with mics can then use voice commands to control compatible devices, such as in-game lighting, mice, keyboards and headsets.
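Conceptually, this kind of voice routing is a dispatch layer sitting between the assistant and the peripherals. The sketch below illustrates the idea only; every class, method and device name here is hypothetical, and none of it corresponds to the actual Razer Synapse or Alexa APIs:

```python
# Hypothetical sketch: routing a recognized voice command to a peripheral
# action through a Synapse-style connected-devices hub. All names here are
# invented for illustration, not real Razer or Alexa APIs.

from typing import Callable, Dict

class PeripheralHub:
    """Registers peripherals and dispatches named voice commands to them."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[str], str]] = {}

    def register(self, command: str, handler: Callable[[str], str]) -> None:
        # Each peripheral registers the spoken commands it can handle.
        self._actions[command] = handler

    def handle_utterance(self, command: str, argument: str) -> str:
        # Fall back gracefully when no peripheral claims the command.
        handler = self._actions.get(command)
        if handler is None:
            return f"unrecognized command: {command}"
        return handler(argument)

hub = PeripheralHub()
hub.register("set keyboard lighting", lambda color: f"keyboard lighting -> {color}")
hub.register("set headset volume", lambda level: f"headset volume -> {level}")

print(hub.handle_utterance("set keyboard lighting", "red"))
```

The design point this illustrates is why the integration covers device settings rather than in-game actions: the hub only needs to know about peripherals that register with it, not about any game's internal state.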

“This is an amazing look forward for Razer into a future for gamers where the full potential of gaming gear is seamless and intuitively controlled through voice activation, synchronization and connected cloud services,” said Razer Co-Founder and CEO Min-Liang Tan (pictured above) in a statement.

The haptic developments, meanwhile, will also come by way of a partnership with third parties — in this case, two companies called Lofelt and Subpac, as well as others that Razer is not disclosing.

As with other haptic systems, the idea with HyperSense is not that die-hard gamers will start installing wind machines, chillers and strange smells in their living rooms, but that Razer wants to build, and to work with others on, things like high-fidelity speakers and touch boards that give users the sensation of different experiences and bring them even closer to the action of the game. (If you think this sounds closer and closer to Ready Player One, you’re not alone.)

So in the case of HyperSense, cues might include specific in-game sounds, like rocket fire or wind, which could get “played out” as sound waves you can feel in your feet or, if you can imagine it, as a connected jacket that suddenly makes you colder, leans you to one side, and shudders with the gust.

“We are finally able to feel what we see and hear all around using the gaming arena, sensing the hiss of enemy fire or feeling the full bass of a monster’s growl,” said Min. “Much like Razer Chroma where we have demonstrated the power of a connected lighting system across gaming devices, Razer HyperSense syncs gaming devices equipped with high-fidelity haptic motors to enhance immersion in gaming.”

In the case of both the voice and touch-based features, it makes perfect sense to bring both to Razer and other games platforms. The wider trend in the gaming industry has been to use advances in technology to make the experience more authentic feeling. Up to now that has taken the form of better graphics and audio, tapping into VR and AR, and playing against and with other people instead of just the machine. Touch is one that hasn’t really been touched (sorry) much up to now, but as we start to see more haptic bells and whistles on devices like our smartphones, it’s logical that it should come to games, too.

Voice, meanwhile, has been shaping up for a long time now as the next big interface, and in cases where your hands and attention might otherwise be occupied, having a voice interface can be indispensable — for example, if you are lost while driving and need to reset your car’s navigation. Games don’t have that kind of urgency — not in the real-world sense, at least — but it seems like just a matter of time before we see games designers and console makers improving the overall experience by letting people speak naturally to move through the action with a scream or even a calm request to turn down the volume a bit.


Luna Display updates its video engine for faster performance

Astro, the company behind Luna Display and Astropad, is releasing a major software update that drastically improves performance. According to the company’s own testing, you should expect as much as a 100 percent improvement in latency and refresh rate.

Luna Display lets you use your iPad as a second monitor for your laptop. For instance, if you’re traveling and you can’t get any work done without an external display, you can use Luna Display to move macOS windows across your laptop display and your iPad.

Some people have also been using it with a home server. For instance, you can use Luna Display to control a Mac Mini using an iPad, a wireless keyboard and a wireless mouse. You’re no longer tied to a desk.

Unlike similar apps, Luna Display relies on a hardware dongle. This tiny USB-C or Mini DisplayPort device emulates a display: in your Mac settings, it looks like you plugged in a standard display even though it’s just a tiny key.

Astropad is a separate app for creative professionals. It lets you mirror your Mac display and use Photoshop with your Apple Pencil. They both rely on the same rendering engine.

And today’s update is all about performance. Thanks to a bunch of optimizations, you get an average latency of 11.3 ms when you use one of those apps with a 13-inch MacBook Pro, an 11-inch iPad Pro and a USB cable. Over Wi-Fi, you get a latency of 22.4 ms.

When it comes to frame rate, it’s a bit harder to quantify. But Astro has compared its products with competing solutions and thinks you have a better chance of hitting 60 frames per second with its apps.

Astro has compared Luna Display with Duet Display and Air Display. And it’s interesting to see that the company reports better performance than Duet.

Duet recently released an update to take advantage of hardware acceleration. At the time, Duet claimed that its solution was faster and cheaper than Luna Display. It’s clear that this space is moving quickly, and the result is better apps for everyone.
