
Android is the world’s most popular smartphone operating system, running on billions of devices around the world. As a result, even the tiniest change in the OS has the potential to affect millions of users, though because of the way Android updates are delivered, it’s debatable how quickly those changes actually reach people. Even so, we’re always looking forward to the next big Android update in the hope that it brings significant change. Speaking of which, the first developer preview of the next major update, Android 12, is right around the corner, and it could bring many improvements. In case you missed our previous coverage, here’s everything we know about Android 12 so far.

Android 12 will first appear as Developer Preview releases. We expect a couple of these, with the first one hopefully landing on Wednesday, February 17, 2021. The Developer Preview for Android 11 began in February 2020, a few weeks ahead of the usual March release, which gave developers more time to adapt their apps to the new platform behaviors and APIs introduced in the update. Since the COVID-19 pandemic hasn’t completely blown over in several parts of the world, we expect Google to follow a longer timeline this year as well. As their name implies, the Android 12 Developer Previews will allow developers to begin platform migration and start adapting their apps. Google is expected to detail most of the major platform changes in the previews to let the entire Android ecosystem know what’s coming. Developer Previews are largely unstable and are not intended for average users. Google also reserves the right to add or remove features at this stage, so do not be surprised if a feature from the first Developer Preview goes missing in later releases. Developer Previews are also restricted to supported Google Pixel devices, though you can try them out on other phones by sideloading a Generic System Image (GSI).

After a couple of Developer Preview releases, we will move on to Android 12 Beta releases, with the first one expected in May or June this year. These releases will be a bit more polished and will give us a fair idea of what the final OS will look like. There may also be minor releases in between Betas, mainly to fix critical bugs. Around this time we will also start seeing releases for devices outside the supported Google Pixel lineup: OEMs will begin migrating their UX skins to the Beta version of Android 12 and recruiting for their own “Preview” programs. However, these releases may lag a version behind the ones available on the Pixels. Again, bugs are to be expected in these preview programs, so they are recommended only for developers and advanced users.

After a beta release or two, Android 12 is expected to reach Platform Stability, a milestone that co-exists with the Beta label and should arrive around July or August this year. Platform Stability means that the Android 12 SDK, NDK APIs, app-facing surfaces, platform behaviors, and even the restrictions on non-SDK interfaces have been finalized; there will be no further changes to how Android 12 behaves or how its APIs function in the betas that follow. At this point, developers can start updating their apps to target Android 12 (API Level 31) without worrying that unexpected changes will break their app’s behavior.
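For app developers, targeting the new release ultimately comes down to a small build configuration change once the final SDK ships. Purely as an illustrative sketch, assuming the API level does land as 31, and using the Gradle Kotlin DSL property names from recent Android Gradle Plugin versions (older plugin versions use method calls such as compileSdkVersion(31) instead), a module-level build file might look like this:

```kotlin
// Module-level build.gradle.kts -- illustrative sketch only.
plugins {
    id("com.android.application")
    kotlin("android")
}

android {
    // API level 31 is the *expected* value for Android 12; during the preview
    // phase Google normally exposes a preview SDK instead, so treat these
    // numbers as placeholders until the platform reaches stability.
    compileSdk = 31
    defaultConfig {
        applicationId = "com.example.app"  // hypothetical package name
        minSdk = 23                        // project-specific minimum
        targetSdk = 31                     // opt in to Android 12 behavior changes
    }
}
```

Until Platform Stability is reached, an app built this way would still need to be re-tested against each new preview, since platform behaviors can change between releases.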
After one or two beta releases carrying the Platform Stability tag, we can expect Google to roll out the first stable Android 12 release, likely in late August or September. As is usually the case, Google’s Pixel devices are expected to be the first to receive the stable update. For non-Pixel phones, we expect wider public betas at this stage; the exact timeline will depend on your phone and its OEM’s plans. A good rule of thumb is that flagships are prioritized for the update, so if your phone sits lower down the price range, expect to receive the update a few weeks or months later. The complete two-part report is posted on OUR FORUM.

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. In the study, participants watched a video while radio signals were sent toward them and measured as they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates, and from these findings the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure (a deliberately simplified sketch of this classification step appears below). The researchers proposed this work could help with the management of health and wellbeing and be used for tasks such as detecting depressive states. Ahsan Noor Khan, a Ph.D. student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside the office, police could use the technology to look for emotional changes in a crowd that might lead to violence. The research team plans to examine the public acceptance and ethical concerns around its use.

Such concerns would not be surprising: they conjure up the Orwellian idea of the “thought police” from 1984. In the novel, the thought police are experts at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never master learning exactly what a person is thinking. This is not the only thought technology on the horizon with dystopian potential. In “Crocodile,” an episode of Netflix’s series Black Mirror, a memory-reading technique is used to investigate accidents for insurance purposes. The “corroborator” device uses a square node placed on a victim’s temple and displays their memories of an event on a screen. The investigator explains that the memories “may not be totally accurate, and they’re often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture.” If this seems farfetched, consider that researchers at Kyoto University in Japan developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated those changes with images shown to the individuals and projected the results onto a screen. Though far from polished, this was essentially a reconstruction of what the person was thinking about. One prediction estimates this technology could be in use by the 2040s.

Brain-computer interfaces (BCIs) are making steady progress on several fronts. In 2016, researchers at Arizona State University demonstrated a student wearing what looks like a swim cap containing nearly 130 sensors connected to a computer that detected his brain waves. With it, the student controlled the flight of three drones with his mind, moving them simply by thinking directional commands: up, down, left, right.
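Returning to the Queen Mary result for a moment, the sketch below is a purely hypothetical illustration of the final classification step: mapping recovered physiological features (here, heart rate and breathing rate) to one of the four emotion labels. It is a toy nearest-centroid classifier in plain Kotlin with invented numbers, not the deep neural network, feature set, or data used in the actual study.

```kotlin
// Toy illustration only: a nearest-centroid classifier over invented
// (heart rate, breathing rate) centroids. The Queen Mary study trained a
// deep neural network on radio-reflection data; nothing below reflects
// their model, features, or numbers.

enum class Emotion { ANGER, SADNESS, JOY, PLEASURE }

data class Features(val heartRateBpm: Double, val breathsPerMin: Double)

// Invented example centroids for each emotion class.
val centroids = mapOf(
    Emotion.ANGER to Features(95.0, 20.0),
    Emotion.SADNESS to Features(70.0, 12.0),
    Emotion.JOY to Features(85.0, 16.0),
    Emotion.PLEASURE to Features(75.0, 14.0)
)

// Squared Euclidean distance between two feature vectors.
fun distance(a: Features, b: Features): Double {
    val dh = a.heartRateBpm - b.heartRateBpm
    val db = a.breathsPerMin - b.breathsPerMin
    return dh * dh + db * db
}

// Pick the emotion whose centroid is closest to the measured features.
// The centroid map is non-empty, so the !! is safe here.
fun classify(sample: Features): Emotion =
    centroids.minByOrNull { (_, centroid) -> distance(sample, centroid) }!!.key

fun main() {
    val measured = Features(heartRateBpm = 92.0, breathsPerMin = 19.0)
    println("Predicted emotion: ${classify(measured)}")  // -> ANGER for this sample
}
```

The real system learns far richer representations directly from the radio reflections; the point here is only to show what “classifying one of four emotions from physiological signals” means in code.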
[Image: Flying drones with your brain in 2019. Source: University of Southern Florida]

By 2019, the headgear used for this kind of drone control had become far more streamlined, and there are now brain-drone races. Beyond the flight demos, BCIs are also being developed for medical applications: MIT researchers have developed a computer interface that can transcribe words the user verbalizes internally but does not actually speak aloud. Visit OUR FORUM for more.

When Nvidia launched its RTX A6000 48GB professional graphics card last October, the company said that it would offer at least twice the performance of its previous-generation Quadro cards. Such claims are not unusual, but how fast is the $4,650 RTX A6000 really in real-world benchmarks? (Interestingly, that’s only $650 more than Galax’s flagship RTX 3090.) Workstation maker Puget Systems decided to find out and ran multiple professional-grade benchmarks on the card.

Nvidia’s RTX A6000 48GB is powered by the GA102 GPU with 10,752 CUDA cores, 336 tensor cores, and 84 RT cores, plus a 384-bit memory bus that pairs the chip with a beefy 48GB of GDDR6 memory. In contrast, Nvidia’s top-of-the-range GeForce RTX 3090 consumer board, based on the same graphics processor, uses a slightly cut-down configuration of 10,496 CUDA cores, 328 tensor cores, and 82 RT cores, with a 384-bit memory interface feeding its ‘mere’ 24GB of GDDR6X memory. So while the RTX A6000 has a slightly beefier GPU configuration than the GeForce RTX 3090, it uses slower memory and therefore offers 768 GB/s of memory bandwidth, about 18% lower than the consumer card’s 936 GB/s (a quick check of this arithmetic appears at the end of this article), so it will not beat the RTX 3090 in gaming. On the other hand, because the RTX A6000 carries 48GB of DRAM, it will perform better in memory-hungry professional workloads.

While all GeForce RTX graphics cards come with Nvidia Studio drivers that accelerate some professional applications, they are not designed to run every professional software suite. The ISV-certified professional drivers of the Quadro series and the RTX A6000 make those cards a better fit for workstations.

Not all professional workloads require an enormous amount of onboard memory, but GPU-accelerated rendering applications benefit greatly, especially with large scenes, and rendering also benefits directly from raw GPU horsepower. It is therefore not surprising that the RTX A6000 48GB outperformed its predecessor by 46.6% to 92.2% in all four rendering benchmarks run by Puget. V-Ray 5 evidently scales well with the extra GPU horsepower and onboard memory, whereas Redshift 3 scales less impressively. Still, the new RTX A6000 48GB is tangibly faster than any other professional graphics card in GPU-accelerated rendering workloads.

Modern video editing and color correction applications, such as DaVinci Resolve 16.2.8 and Adobe Premiere Pro 14.8, can also accelerate some tasks on the GPU. In both cases, the RTX A6000 48GB offers tangible performance advantages over its predecessor, and the gap looks even wider against graphics cards released several years ago.

Like other modern professional graphics applications, Adobe After Effects and Adobe Photoshop can take advantage of GPUs, yet both programs are CPU-bottlenecked in many cases, which means that any decent graphics processor (and not necessarily a professional one) is usually enough for them. Nonetheless, the new RTX A6000 48GB managed to show some gains over its predecessor in these two apps as well. More facts and figures, along with possible pricing, can be found on OUR FORUM.
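For reference, the memory-bandwidth figures quoted above follow directly from the memory configuration: bandwidth is the bus width (in bytes) multiplied by the effective per-pin data rate. The sketch below assumes the commonly cited data rates of 16 Gbps for the RTX A6000’s GDDR6 and 19.5 Gbps for the RTX 3090’s GDDR6X; those rates are not stated in the article itself.

```kotlin
// Quick sanity check of the quoted memory-bandwidth figures.
// Formula: bandwidth (GB/s) = (bus width in bits / 8) * effective data rate (Gbps).
// The 16 Gbps and 19.5 Gbps rates are the commonly cited values for these
// cards, assumed here rather than taken from the article.

fun bandwidthGBps(busWidthBits: Int, dataRateGbps: Double): Double =
    busWidthBits / 8.0 * dataRateGbps

fun main() {
    val a6000 = bandwidthGBps(busWidthBits = 384, dataRateGbps = 16.0)    // 768.0 GB/s
    val rtx3090 = bandwidthGBps(busWidthBits = 384, dataRateGbps = 19.5)  // 936.0 GB/s
    val deficitPercent = (1 - a6000 / rtx3090) * 100                      // ~18%

    println("RTX A6000: $a6000 GB/s")
    println("RTX 3090:  $rtx3090 GB/s")
    println("A6000 deficit: %.1f%%".format(deficitPercent))
}
```

Both results line up with the 768 GB/s and 936 GB/s figures quoted above, and the gap works out to roughly 18%.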