How to use triggers to control Android with gestures

The new features, called Camera Switches and Project Activate, let users navigate their devices without their hands or voice.

Abrar Al-Heeti, video producer, CNET

Project Activate can trigger customized actions on your phone, including sending messages, playing audio files and speaking short phrases.

Google is rolling out a few new accessibility features for Android users, including the ability to control your phone and communicate using facial gestures, the company said Thursday. It also rolled out an update to Lookout, which uses a person’s phone camera to identify objects and text in the physical world.

The first update, called Camera Switches, detects facial gestures using your phone’s camera. Users can choose from six gestures — look right, look left, look up, smile, raise eyebrows or open your mouth — to navigate their phone. They can also assign gestures to carry out tasks like open notifications, go back to the home screen or pause gesture detection.

Here’s a look at Camera Switches in action.

Camera Switches is an update to Switch Access, an Android feature launched in 2015 that lets people with limited dexterity more easily navigate their devices using adaptive buttons called physical switches. Now, with Camera Switches, users can scan and select items on their phone without using their hands or voice. The new feature can be used alongside physical switches.

Camera Switches also allows users or their caregiver to choose how long to hold a gesture and how big it needs to be for the phone to detect it. To use the feature, open your phone’s settings, select Accessibility, and then tap Switch Access (under Interaction Controls). Turn it on and grant permissions. You can also download the app from the Play Store. (Here’s a video on how to set up Camera Switches.)

Additionally, a new Android app called Project Activate lets people use those same facial gestures from Camera Switches to trigger customized actions with a single gesture, like saying a preset phrase, sending a text or making a phone call.

For instance, someone could use Project Activate to answer yes or no to a question, ask for a moment to type something into a speech-generating device, or send a text asking someone to come to them.

Project Activate lets you customize which facial gestures you want to use for different actions.

The app is customizable, ranging from the actions you want to activate to the facial gestures you want to use. Project Activate is available in the US, UK, Canada and Australia in English. You can download it from the Google Play store.

Lastly, Google rolled out an update to Lookout, which launched in 2019 and helps people who are blind or low-vision identify food labels, pinpoint objects in a room and scan US currency. Last year, the search giant expanded Lookout by adding a Scan Document mode to capture text on a page. Now, that feature can also read handwritten text. It currently supports handwriting in Latin-based languages, with more language compatibility coming soon, Google said. Additionally, Lookout’s currency mode now recognizes Euros and Indian Rupees, in addition to US dollars. More currencies will be added, the company said.

As we wait for an Android 13 preview, details for the upcoming release trickle in. On top of recently spotted features like the tap-to-transfer media playback, some UI tweaks for the output picker, QR code scanning tweaks, and a “Panlingual” feature for per-app language settings, we have a few lingering tidbits for you. On top of those, Android 13 could debut a tweak for the Assistant trigger in button-based navigation modes, and a user switcher menu may be coming to the keyguard.

Home button Assistant toggle

Gesture-based navigation may be the hot thing right now, but plenty of folks still use the older button-based system. And that’s not just because of habit; buttons are better for accessibility, too. However, it looks like Google may be adding an option to tweak how that button-based navigation works: a setting that stops the home button long press from triggering your digital assistant.

Frankly, I had assumed a setting like this already existed, but digging around current Android 12 releases, there’s no such option. The new setting adds a cog menu to the two- and three-button navigation settings in the system navigation menu (accessed in Android 12 for Pixels via System -> Gestures -> System navigation), similar to the menu that already exists for gesture navigation settings. However, this one has just a single item: Hold Home to invoke assistant. It seems to be enabled by default, but you can turn it off so that a long-press of the home button no longer triggers your digital assistant.

This might not seem like a terribly helpful change, but it could actually be handy for accessibility. Some of us may lack the coordination or precision in motion to tap a button reliably without unintentionally triggering a longer press, and a change like this can make it easier to navigate around Android in those cases with just a small sacrifice in functionality. (It might even be good for Google to link to it from the Accessibility menu.)

Keyguard user switcher

Most of us probably don’t use Android’s user switcher very often — phones tend to be single-user devices, and Android-powered tablets aren’t very popular (though maybe Android 12L can make a dent in that). Still, technically you can set up multiple user profiles on an Android device and switch between them. They could be work accounts, family, or your friends, and the separation ensures a little added privacy and some freedom for individual customization.

Switching users isn’t exactly a pain, and there are a few ways to do it (there’s a Quick Settings tile/shortcut on phones, though tablets get a snazzy menu right on the lock screen), but Android 13 is adding another method: The keyguard itself might get a user switcher menu.

Accessibility shortcuts are a quick way to turn on accessibility apps or switch between apps. For each accessibility app, you can choose the shortcut that you want to use.


Step 1: Set up accessibility shortcuts

You can set up as many shortcuts as you like for the accessibility apps that you use on your Android device.

  1. On your device, open the Settings app.
  2. Tap Accessibility.
  3. Select the app that you want to use with a shortcut.
  4. Select the shortcut setting, like TalkBack shortcut or Magnification shortcut.
  5. Choose a shortcut:
    • Tap accessibility button: Tap the Accessibility button on your screen.
    • Hold volume keys: Touch and hold both volume keys.
    • Two-finger swipe up from bottom, or three-finger swipe if TalkBack is on.
    • Triple-tap screen, available for Magnification only.
  6. Tap Save.

Optional: Change device navigation to buttons or gestures

On many devices, you can choose between 3-button navigation and gesture navigation.

  1. On your device, open the Settings app.
  2. Select System > Gestures > System navigation.
  3. Choose your new navigation option.

Shortcuts for navigation types

There are different shortcuts available for the 2 navigation types:


    • 3-button navigation
      • Accessibility button in the navigation bar
      • Floating accessibility button
    • Gesture navigation
      • Floating accessibility button
      • Two-finger swipe up (three-finger swipe up if TalkBack is on)

      Step 2: Use accessibility shortcuts

      After you turn on accessibility shortcuts, you can use them to start your accessibility apps or switch between accessibility apps. Read the tips below for each shortcut.

      Tap accessibility button

      • Start an app: In your navigation bar, tap Accessibility.
      • Switch between apps: If you’ve assigned more than one app to use the accessibility button, touch and hold Accessibility. In the menu, select the new app.

      Tap floating accessibility button

      • Start an app or switch between apps: Tap the floating accessibility button.
      • Move the floating accessibility button: Drag and drop the floating accessibility button.
      • Resize the floating accessibility button: Touch and drag the floating accessibility button towards the edge of the screen to make it smaller.

      Change floating accessibility button settings

      1. On your device, open the Settings app.
      2. Select Accessibility > Accessibility button.
        • To choose whether you use the floating button: Under “Location”, select Floating over other apps.
        • To change the size of the buttons: Select Size and choose a new size.
        • To set transparency and fade: Use the slider to set transparency when not in use.

      Two-finger swipe up from bottom, or, if TalkBack is on, three-finger swipe

      • Start an app: From the bottom of the screen, swipe up with two fingers, or with three fingers if TalkBack is on.
      • Switch between apps: If you’ve assigned more than one app to use the accessibility button, swipe up with two fingers and hold, or with three fingers if TalkBack is on, then lift your fingers. In the menu, select the new app.

      Volume key shortcut

      • Start an app: Press and hold both volume keys until the menu appears and select the app you want to use.
      • Switch between apps: Press and hold both volume keys. When the shortcut menu opens, select the app that you want to use.
      • Choose which apps start with the volume key shortcut: Press and hold both volume keys. When the shortcut menu opens, select Edit shortcuts. Choose features to use with this shortcut, then tap Done.

      Developers: Add the accessibility button

      Learn how to use the Accessibility button in your accessibility service.

      Get help

      For more help with accessibility shortcuts, contact the Google Disability Support team.

      Every day, people use voice commands, like “Hey Google,” or their hands to navigate their phones. However, that’s not always possible for people with severe motor and speech disabilities.

      To make Android more accessible for everyone, we’re introducing two new tools that make it easier to control your phone and communicate using facial gestures: Camera Switches and Project Activate. Built with feedback from people who use alternative communication technology, both of these tools use your phone’s front-facing camera and machine learning technology to detect your face and eye gestures. We’ve also expanded our existing accessibility tool, Lookout, so people who are blind or low-vision can get more things done quickly and easily.


      Camera Switches: navigate Android with facial gestures

      In 2015, we launched Switch Access for Android, which lets people with limited dexterity navigate their devices more easily using adaptive buttons called physical switches. Camera Switches, a new feature in Switch Access, turns your phone’s camera into a new type of switch that detects facial gestures. Now it’s possible for anyone to use eye movements and facial gestures to navigate their phone — sans hands and voice! Camera Switches begins rolling out within the Android Accessibility Suite this week and will be fully available by the end of the month.

      You can choose from one of six gestures — look right, look left, look up, smile, raise eyebrows or open your mouth — to scan and select on your phone. There are different scanning methods you can choose from — so no matter your experience with switch scanning, you can move between items on your screen with ease. You can also assign gestures to open notifications, jump back to the home screen or pause gesture detection. Camera Switches can be used in tandem with physical switches.

      We heard from people who have varying speech and motor impairments that customization options would be critical. With Camera Switches, you or a caregiver can select how long to hold a gesture and how big it has to be to be detected. You can use the test screen to confirm what works best for you.

      An individual and their caregiver customize Camera Switches. The setup process, shown through a finger on the screen, showcases customization of gesture size and assigning a gesture to a scanning action.



      SAN FRANCISCO: Using a raised eyebrow or smile, people with speech or physical disabilities can now operate their Android-powered smartphones hands-free, Google said Thursday.

      Two new tools put machine learning and front-facing cameras on smartphones to work detecting face and eye movements.

      Users can scan their phone screen and select a task by smiling, raising eyebrows, opening their mouth, or looking to the left, right or up.

      “To make Android more accessible for everyone, we’re launching new tools that make it easier to control your phone and communicate using facial gestures,” Google said.

      The Centers for Disease Control and Prevention estimates that 61 million adults in the United States live with disabilities, which has pushed Google and rivals Apple and Microsoft to make products and services more accessible to them.

      “Every day, people use voice commands, like ‘Hey Google’, or their hands to navigate their phones,” the tech giant said in a blog post.

      “However, that’s not always possible for people with severe motor and speech disabilities.”

      The changes are the result of two new features. One, called “Camera Switches,” lets people use their faces instead of swipes and taps to interact with smartphones.

      The other is Project Activate, a new Android application which allows people to use those gestures to trigger an action, like having a phone play a recorded phrase, send a text, or make a call.

      “Now it’s possible for anyone to use eye movements and facial gestures that are customized to their range of movement to navigate their phone – sans hands and voice,” Google said.

      The free Activate app is available in Australia, Britain, Canada and the United States on the Google Play store.

      Apple, Google and Microsoft have consistently rolled out innovations that make internet technology more accessible to people with disabilities or who find that age has made some tasks, such as reading, more difficult.

      Voice-commanded digital assistants built into speakers and smartphones can enable people with sight or movement challenges to tell computers what to do.

      There is software that identifies text on web pages or in images and then reads it aloud, as well as automatic generation of captions that display what is said in videos.

      An “AssistiveTouch” feature that Apple built into the software powering its smart watch lets touchscreen displays be controlled by sensing movements such as finger pinches or hand clenches.

      “This feature also works with VoiceOver so you can navigate Apple Watch with one hand while using a cane or leading a service animal,” Apple said in a post.

      Computing colossus Microsoft describes accessibility as essential to empowering everyone with technology tools.

      “To enable transformative change accessibility needs to be a priority,” Microsoft said in a post.

      “We aim to build it into what we design for every team, organization, classroom, and home.”

      Use this feature to control functions without touching the screen.

      Before using this feature, make sure the Air Gesture feature is activated.


      Note: The device may not recognise your gestures if you perform them too far from the device or while wearing dark-coloured clothes, such as gloves.


      Note: A pop-up screen will be displayed advising you that, to enable this feature fully, you need to enable at least one of the relevant functions.

      The Air Gesture sensor is positioned at the top right above the screen.


      The sensor can recognize your gestures best at a distance of 7cm when you perform them at normal speed.

      When air gestures are available, the icon shown here will appear on the status bar at the top of the screen.

      This document explains how to listen for, and respond to, gestures in Flutter. Examples of gestures include taps, drags, and scaling.

      The gesture system in Flutter has two separate layers. The first layer has raw pointer events that describe the location and movement of pointers (for example, touches, mice, and styli) across the screen. The second layer has gestures that describe semantic actions that consist of one or more pointer movements.


      Pointers represent raw data about the user’s interaction with the device’s screen. There are four types of pointer events:

      • PointerDownEvent: The pointer has contacted the screen at a particular location.
      • PointerMoveEvent: The pointer has moved from one location on the screen to another.
      • PointerUpEvent: The pointer has stopped contacting the screen.
      • PointerCancelEvent: Input from this pointer is no longer directed towards this app.

      On pointer down, the framework does a hit test on your app to determine which widget exists at the location where the pointer contacted the screen. The pointer down event (and subsequent events for that pointer) are then dispatched to the innermost widget found by the hit test. From there, the events bubble up the tree and are dispatched to all the widgets on the path from the innermost widget to the root of the tree. There is no mechanism for canceling or stopping pointer events from being dispatched further.

      To listen to pointer events directly from the widgets layer, use a Listener widget. However, generally, consider using gestures (as discussed below) instead.
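As a minimal sketch of this raw-pointer layer, the following Dart example wraps a widget in a Listener and logs the four event types described above. The PointerLogger widget name and its layout are our own, not from the Flutter docs:

```dart
import 'package:flutter/material.dart';

// Illustrative sketch, not from the Flutter docs: a Listener that
// logs the raw pointer events described above.
class PointerLogger extends StatelessWidget {
  const PointerLogger({super.key});

  @override
  Widget build(BuildContext context) {
    return Listener(
      // The pointer has contacted the screen.
      onPointerDown: (PointerDownEvent e) => debugPrint('down at ${e.position}'),
      // The pointer has moved to a new location.
      onPointerMove: (PointerMoveEvent e) => debugPrint('move to ${e.position}'),
      // The pointer has stopped contacting the screen.
      onPointerUp: (PointerUpEvent e) => debugPrint('up at ${e.position}'),
      // Input from this pointer is no longer directed at this app.
      onPointerCancel: (PointerCancelEvent e) => debugPrint('cancelled'),
      child: Container(
        width: 200,
        height: 200,
        color: Colors.lightBlue,
        alignment: Alignment.center,
        child: const Text('Touch me'),
      ),
    );
  }
}
```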


      Gestures represent semantic actions (for example, tap, drag, and scale) that are recognized from multiple individual pointer events, potentially even multiple individual pointers. Gestures can dispatch multiple events, corresponding to the lifecycle of the gesture (for example, drag start, drag update, and drag end):


      Tap

      • onTapDown: A pointer that might cause a tap has contacted the screen at a particular location.
      • onTapUp: A pointer that will trigger a tap has stopped contacting the screen at a particular location.
      • onTap: The pointer that previously triggered onTapDown has also triggered onTapUp, which ends up causing a tap.
      • onTapCancel: The pointer that previously triggered onTapDown will not end up causing a tap.

      Double tap

      • onDoubleTap: The user has tapped the screen at the same location twice in quick succession.

      Long press

      • onLongPress: A pointer has remained in contact with the screen at the same location for a long period of time.

      Vertical drag

      • onVerticalDragStart: A pointer has contacted the screen and might begin to move vertically.
      • onVerticalDragUpdate: A pointer that is in contact with the screen and moving vertically has moved in the vertical direction.
      • onVerticalDragEnd: A pointer that was previously in contact with the screen and moving vertically is no longer in contact with the screen, and was moving at a specific velocity when it stopped contacting the screen.

      Horizontal drag

      • onHorizontalDragStart: A pointer has contacted the screen and might begin to move horizontally.
      • onHorizontalDragUpdate: A pointer that is in contact with the screen and moving horizontally has moved in the horizontal direction.
      • onHorizontalDragEnd: A pointer that was previously in contact with the screen and moving horizontally is no longer in contact with the screen, and was moving at a specific velocity when it stopped contacting the screen.


      Pan

      • onPanStart: A pointer has contacted the screen and might begin to move horizontally or vertically. This callback causes a crash if onHorizontalDragStart or onVerticalDragStart is set.
      • onPanUpdate: A pointer that is in contact with the screen and is moving in the vertical or horizontal direction. This callback causes a crash if onHorizontalDragUpdate or onVerticalDragUpdate is set.
      • onPanEnd: A pointer that was previously in contact with the screen is no longer in contact with the screen, and was moving at a specific velocity when it stopped contacting the screen. This callback causes a crash if onHorizontalDragEnd or onVerticalDragEnd is set.
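To make the drag lifecycle concrete, here is a hedged Dart sketch (the DraggableBox widget is our own invention) that moves a box around using only the pan callbacks; it deliberately avoids mixing them with the horizontal or vertical drag callbacks, which causes a crash:

```dart
import 'package:flutter/material.dart';

// Illustrative sketch (our own widget): drag a box around the screen
// using only the pan callbacks.
class DraggableBox extends StatefulWidget {
  const DraggableBox({super.key});

  @override
  State<DraggableBox> createState() => _DraggableBoxState();
}

class _DraggableBoxState extends State<DraggableBox> {
  Offset _offset = Offset.zero;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onPanStart: (details) => debugPrint('pan started'),
      // delta is the pointer's movement since the last update.
      onPanUpdate: (details) => setState(() => _offset += details.delta),
      onPanEnd: (details) => debugPrint('pan ended'),
      child: Transform.translate(
        offset: _offset,
        child: Container(width: 100, height: 100, color: Colors.orange),
      ),
    );
  }
}
```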

      Adding gesture detection to widgets

      To listen to gestures from the widgets layer, use a GestureDetector .

      If you’re using Material Components, many of those widgets already respond to taps or gestures. For example, IconButton and TextButton respond to presses (taps), and ListView responds to swipes to trigger scrolling. If you are not using those widgets, but you want the “ink splash” effect on a tap, you can use InkWell .
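When you do need a bare GestureDetector, wiring up the tap-family callbacks listed earlier looks roughly like this minimal sketch (the TapBox widget name and its styling are our own):

```dart
import 'package:flutter/material.dart';

// Illustrative sketch (widget name and styling are our own): a
// GestureDetector wiring up tap, double-tap, and long-press callbacks.
class TapBox extends StatelessWidget {
  const TapBox({super.key});

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onTap: () => debugPrint('tap'),
      onDoubleTap: () => debugPrint('double tap'),
      onLongPress: () => debugPrint('long press'),
      child: Container(
        width: 120,
        height: 120,
        color: Colors.teal,
        alignment: Alignment.center,
        child: const Text('Tap me'),
      ),
    );
  }
}
```

Because onTap, onDoubleTap, and onLongPress are all non-null here, the detector enters the corresponding recognizers into the gesture arena and waits briefly after a tap to rule out a double tap.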

      Gesture disambiguation

      At a given location on screen, there might be multiple gesture detectors. All of these gesture detectors listen to the stream of pointer events as they flow past and attempt to recognize specific gestures. The GestureDetector widget decides which gestures to attempt to recognize based on which of its callbacks are non-null.

      When there is more than one gesture recognizer for a given pointer on the screen, the framework disambiguates which gesture the user intends by having each recognizer join the gesture arena. The gesture arena determines which gesture wins using the following rules:

      At any time, a recognizer can declare defeat and leave the arena. If there’s only one recognizer left in the arena, that recognizer is the winner.

      At any time, a recognizer can declare victory, which causes it to win and all the remaining recognizers to lose.

      For example, when disambiguating horizontal and vertical dragging, both recognizers enter the arena when they receive the pointer down event. The recognizers observe the pointer move events. If the user moves the pointer more than a certain number of logical pixels horizontally, the horizontal recognizer declares victory and the gesture is interpreted as a horizontal drag. Similarly, if the user moves more than a certain number of logical pixels vertically, the vertical recognizer declares victory.

      The gesture arena is beneficial when there is only a horizontal (or vertical) drag recognizer. In that case, there is only one recognizer in the arena and the horizontal drag is recognized immediately, which means the first pixel of horizontal movement can be treated as a drag and the user won’t need to wait for further gesture disambiguation.
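The two arena rules can be modeled with a toy Dart class. This is entirely our own sketch; Flutter's real arena implementation is considerably more involved:

```dart
// Toy model of the gesture arena rules above (our own sketch, not
// Flutter's implementation): a recognizer can resign or claim victory,
// and a sole survivor wins by default.
class ToyArena {
  final Set<String> _members;
  String? winner;

  ToyArena(Iterable<String> members) : _members = {...members} {
    _checkSoleSurvivor();
  }

  /// A recognizer declares defeat and leaves the arena.
  void resign(String name) {
    if (winner != null) return;
    _members.remove(name);
    _checkSoleSurvivor();
  }

  /// A recognizer declares victory, causing all others to lose.
  void claimVictory(String name) {
    if (winner == null && _members.contains(name)) {
      winner = name;
    }
  }

  // Rule: if only one recognizer remains, it is the winner.
  void _checkSoleSurvivor() {
    if (winner == null && _members.length == 1) {
      winner = _members.first;
    }
  }
}
```

In this model, an arena created with only a horizontal drag recognizer has a winner immediately, mirroring why a lone drag recognizer can treat the first pixel of movement as a drag.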


      To make a user interface element clickable with the tap gesture, create a TapGestureRecognizer instance, handle the Tapped event and add the new gesture recognizer to the GestureRecognizers collection on the user interface element; a common case is attaching a TapGestureRecognizer to an Image element.

      By default the image will respond to single taps. Set the NumberOfTapsRequired property to wait for a double-tap (or more taps if required).

      When NumberOfTapsRequired is set above one, the event handler will only be executed if the taps occur within a set period of time (this period is not configurable). If the second (or subsequent) taps do not occur within that period they are effectively ignored and the ‘tap count’ restarts.

      Using Xaml

      A gesture recognizer can be added to a control in Xaml using attached properties; for example, a TapGestureRecognizer can be attached to an image and configured to handle a double-tap event.

      The code for the event handler (in the sample) increments a counter and changes the image from color to black & white.

      Using ICommand

      Applications that use the Model-View-ViewModel (MVVM) pattern typically use ICommand rather than wiring up event handlers directly. The TapGestureRecognizer can easily support ICommand, either by setting the binding in code or in Xaml.

      The complete code for this view model, including the relevant Command implementation details, can be found in the sample.

      Google released the first Developer Preview of Android 12 the other day, and we’ve been digging into the code to find everything that’s new. One of the most exciting changes we’ve spotted is an overhaul to how Android detects back swipe gestures. If implemented, Android 12 will use machine learning models to predict when the user intends to use the back gesture.

      With the launch of Android 10, Google introduced its fullscreen gestural navigation system. Android’s gesture navigation system places a pill at the bottom of the screen that you can interact with to switch between apps, open up the recent apps interface, or go to the homescreen. The back button, meanwhile, was replaced with an inward swipe gesture that can be triggered from the left or right side of the screen. Much ink has been spilled about the problem with Android’s back gesture, but to Google’s credit, they’ve made the experience consistent across the ecosystem and have provided APIs for developers to ensure compatibility with the gesture. While plenty of apps have shifted away from using a Navigation Drawer, there are still plenty of apps where the back gesture can conflict with the in-app UI. To solve this problem, Google is testing a new machine learning-based approach to back gesture detection in Android 12.

      How Android’s back gesture currently works is as follows. An invisible trigger area exists at nearly all times on both sides of the screen. This trigger area extends between 18dp-40dp in width from the sides of the screen depending on the user-defined back sensitivity setting. The user can trigger a back gesture by simply placing a finger anywhere within the inset and then moving that finger inward past a minimum distance. Google used phone screen heatmaps when designing the back gesture insets, and they settled on recognition areas that users feel are ergonomic and one-handed friendly.
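The inset check described above can be sketched as a small pure function. This is a toy model with invented names and thresholds, not AOSP code:

```dart
// Toy model of the inset-based back-gesture check (our own sketch,
// not AOSP code). Positions are in logical pixels; insetWidth mirrors
// the 18dp-40dp trigger area, minDistance the minimum inward travel.
bool isBackSwipe({
  required double startX,
  required double endX,
  required double screenWidth,
  double insetWidth = 40,
  double minDistance = 16,
}) {
  // Started in the left inset and moved inward (rightward) far enough.
  final fromLeftEdge =
      startX <= insetWidth && (endX - startX) >= minDistance;
  // Started in the right inset and moved inward (leftward) far enough.
  final fromRightEdge =
      startX >= screenWidth - insetWidth && (startX - endX) >= minDistance;
  return fromLeftEdge || fromRightEdge;
}
```

Under this model, a swipe starting 10px from the left edge and travelling inward counts as a back gesture, while the same movement starting 300px in is passed through to the app.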

      Gesture navigation in Android 10+. Source: Google.

      The problem with this approach, as Google themselves admit, is that some users still swipe to open navigation drawers, which conflicts with the back gesture. Every app is designed differently, but the back gesture trigger area stays the same. This one-size-fits-all approach thus doesn’t play nicely with how some apps are designed, which is why Google is experimenting with machine learning to replace the current model.

      While investigating the changes that Google made to the double-tap back gesture in Android 12, XDA Recognized Developer Quinny899 discovered the presence of a new TensorFlow Lite model and vocab file called “backgesture.” The latter contains a list of 43,000 package names for both popular and obscure Android apps, including 2 of Quinny899’s own apps. We believe this list contains the apps that Google trained its machine learning model against, i.e., they determined the most frequent start and end points for the back gesture on an app-by-app basis. Digging deeper, we discovered the machine learning model is referenced in the updated EdgeBackGestureHandler class in the SystemUI of Android 12. If a feature flag is enabled, it seems that Android 12 will use the ML model to predict whether the user intended to perform a back gesture or simply wanted to navigate in the app. The data fed to the ML model for inferencing includes the start and end points of the gesture, whether the app is in the list, and the display’s width in pixels. Alternatively, if the feature flag is disabled, Android 12 simply reverts to the standard back swipe detection method (i.e., insets).

      Currently, the ML-based back gesture prediction is disabled by default in Android 12 Developer Preview 1. It’s possible that Google may scrap this approach if it ends up not being superior to the existing inset-based model. However, we won’t know for sure until Google unveils the Android 12 Beta in a couple of months, as that’s the time that Google usually reveals its bigger changes to Android.

      Thanks to PNF Software for providing us a license to use JEB Decompiler, a professional-grade reverse engineering tool for Android applications.

      With watchOS 8's AssistiveTouch, those with newer Apple Watches can navigate them with a pinch, clench, or wave.

      (Photo: Angela Moscaritolo)

      Under normal circumstances, you can control your Apple Watch by tapping the screen, hitting the side button, and pressing or turning the Digital Crown. But if your fingers are too large to accurately tap such a small screen, you’re wearing gloves and can’t make contact with the screen, or have limited motor functions, you don’t actually have to touch your watch to use it.

      By enabling the AssistiveTouch and Hand Gestures features, you can access the display, activate the Digital Crown, trigger the side button, move an onscreen pointer, and perform other actions, all without touching the watch itself. Instead of tapping the screen, you use your watch-wearing hand to perform specific actions by pinching or double pinching your fingers, clenching or double clenching your hand, and tilting your arm.

      AssistiveTouch is only supported by the Apple Watch Series 6, Series 7, and SE. You will also need to be running iOS 15 or higher on your iPhone and watchOS 8 or higher on your watch. To directly update either device, go to Settings > General > Software Update. You’ll be told that your OS is up to date or be prompted to download the latest update.


      Enable Hand Gestures

      To enable AssistiveTouch on your iPhone, open the Watch app. At the My Watch screen, tap Accessibility and select AssistiveTouch. To enable it on the Apple Watch, open Settings > Accessibility > AssistiveTouch, then turn on the switch for AssistiveTouch. Enabling it on one device automatically enables it on the other.

      Next, tap the Hand Gestures option and turn on the switch. To try out the gestures, tap the Learn more link under the entry for Hand Gestures. Tap the entry for each gesture—Pinch, Double Pinch, Clench, and Double Clench. If you do this from your phone, the Watch app will direct you to try it out on your watch. Follow the onscreen instructions and diagrams to practice each gesture. When finished, tap Done.

      After you’ve enabled Hand Gestures in the Accessibility settings screen, you must turn on the feature each time you want to use it. However, you can do this with a gesture. Double clench your hand to activate hand gestures. You’ll hear a thumping sound indicating that the feature is active and a cursor will appear on the screen.

      How to Use Hand Gestures

      By default, pinching your thumb and forefinger once moves the cursor on the watch to the next item in a list or on a screen. Pinching your fingers twice moves the cursor to the previous item. Clenching your hand once activates a tap to select or open the current item. Clenching your hand twice displays the Action Menu with icons to activate a variety of different commands.

      The Action Menu allows you to: activate the Digital Crown; move to the System menu; scroll left, right, up, or down; turn the Digital Crown up or down; hear the time spoken aloud; customize your current watch face; switch between pointer and gesture mode; autoscroll the screen; and put the watch to sleep.

      The System menu displays a submenu with access to the Notification Center, Control Center, the Dock, the Home Screen, Apple Pay, Siri, and the Side button. The hand gestures are all context sensitive, so what they do varies based on your current screen or location.

      Now let’s say the screen is currently displaying one of your watch faces with several complications available. Pinch your fingers to move from one complication to another; double-pinch them to move back to the previous complication. As you do this, notice that the cursor highlights the current complication.

      Next, maybe you want to check your calendar appointments. With the cursor on the date complication, clench your hand to activate it and retrieve your calendar. Pinch your fingers to cycle through each event and then clench your hand to view the details on a specific event.

      Double-clench your hand to trigger the Action menu. With the cursor on the first icon for Press Crown, clench your hand to activate it, thereby bringing you back to your previous watch face.

      Activate Motion Pointer

You can also activate and control a Motion Pointer to move around the screen by tilting the watch up and down and side to side. From any screen, double-clench your hand to trigger the Action Menu. Keep pinching your fingers until you reach the Interaction icon, then clench your hand to open the Interaction menu, and clench again to activate Motion Pointer.

      Now tilt your hand up or down or side to side to move the pointer around the screen. When the pointer is on top of an item you wish to activate, keep your hand still until the ring completes one motion around the circular pointer.

      Customize Gestures and Motions

      To customize the hand gestures and Motion Pointer, open the Watch app on your phone or go to Settings on your watch. Tap Accessibility > AssistiveTouch > Hand Gestures and you can change the action for each of the four gestures. Go back to the previous screen and tap Motion Pointer to change the sensitivity, tolerance, and hot edges for the pointer.

      Choose Scanning Style to change how you move to different items on the screen. With Manual scanning, you use the pinch or double-pinch gesture to manually move between different items on the screen or in a menu. With Automatic Scanning, each item on the screen or in a menu is automatically selected one after the other. If you enable Automatic Scanning, you can also control the speed of the scan.

      There are other accessibility features to enable under the AssistiveTouch menu. You can enable High Contrast and change the color of the cursor to better see the item selected by the cursor. Tap Customize Menu to tweak the Action Menu and add actions, change the position of actions, alter the size of the menu, and modify the autoscroll speed. Select Enable Confirm with AssistiveTouch if you wish to use AssistiveTouch with Apple Pay.


      Google’s raft of new features coming to Android this fall includes some interesting new accessibility features. There are new facial gesture controls designed for people with motor impairments, as well as a new handwriting recognition feature for Lookout, a Google app that uses a phone’s camera to help people with low vision or blindness. There are also improvements coming for Google Assistant, Digital Wellbeing, Nearby Share, and Google’s Android keyboard.

      There are two parts to Android’s facial gesture controls. Camera Switches (which has previously been spotted in Android 12’s beta) sits within Android’s existing Accessibility Suite and lets you use gestures like opening your mouth or raising your eyebrows to activate various commands. The second part is Project Activate, a standalone app that’s designed to help people communicate. Here, facial gestures can be set to trigger actions like playing audio or sending a text message.

      Using facial gestures to control a weather app. Image: Google

Meanwhile, Google says Lookout’s handwriting recognition will be able to read out Latin-based languages and is accessible from its Documents mode. Euros and Indian rupees are also being added to the app’s Currency mode.

Beyond accessibility, Gboard is also seeing improvements. Most interesting is that on all devices running Android 11 and up, Google’s Android keyboard will be able to use Smart Compose to finish sentences, similar to what’s already possible in specific services like Google Docs and Gmail. This feature has previously been exclusive to Pixel phones. Copying and pasting is also being updated to automatically separate out contact details like phone numbers and email addresses when they’re copied in a single chunk of text. Finally, Gboard will also proactively suggest sharing a screenshot if you immediately switch to a messaging app after taking one.

      The “Heads Up” Digital Wellbeing feature, which reminds people to stop looking down at their phones while they’re walking, is expanding from just Pixels to all Android phones, while asking Google Assistant to “open my reminders” now offers a one-stop-shop for managing all reminders added to the voice assistant. Nearby Share is also being updated to offer new visibility settings, letting you control who can see your device and send files. It’s a real grab bag of features that should offer something for everyone.

      Google isn’t offering exact release dates for any of these features, and there’s no mention of whether any of them will be exclusive to the upcoming Android 12. So expect most of them to gradually roll out over the course of this fall, which is generally considered to run until the start of December.

      Today, most Web content is designed for keyboard and mouse input. However, devices with touch screens (especially portable devices) are mainstream and Web applications can either directly process touch-based input by using Touch Events or the application can use interpreted mouse events for the application input. A disadvantage to using mouse events is that they do not support concurrent user input, whereas touch events support multiple simultaneous inputs (possibly at different locations on the touch surface), thus enhancing user experiences.

      The touch events interfaces support application specific single and multi-touch interactions such as a two-finger gesture. A multi-touch interaction starts when a finger (or stylus) first touches the contact surface. Other fingers may subsequently touch the surface and optionally move across the touch surface. The interaction ends when the fingers are removed from the surface. During this interaction, an application receives touch events during the start, move, and end phases. The application may apply its own semantics to the touch inputs.


Touch events consist of three interfaces (Touch, TouchEvent and TouchList) and the following event types:

• touchstart – fired when a touch point is placed on the touch surface.
      • touchmove – fired when a touch point is moved along the touch surface.
      • touchend – fired when a touch point is removed from the touch surface.
      • touchcancel – fired when a touch point has been disrupted in an implementation-specific manner (for example, too many touch points are created).

      The Touch interface represents a single contact point on a touch-sensitive device. The contact point is typically referred to as a touch point or just a touch. A touch is usually generated by a finger or stylus on a touchscreen, pen or trackpad. A touch point’s properties include a unique identifier, the touch point’s target element as well as the X and Y coordinates of the touch point’s position relative to the viewport, page, and screen.

      The TouchList interface represents a list of contact points with a touch surface, one touch point per contact. Thus, if the user activated the touch surface with one finger, the list would contain one item, and if the user touched the surface with three fingers, the list length would be three.

      The TouchEvent interface represents an event sent when the state of contacts with a touch-sensitive surface changes. The state changes are starting contact with a touch surface, moving a touch point while maintaining contact with the surface, releasing a touch point and canceling a touch event. This interface’s attributes include the state of several modifier keys (for example the shift key) and the following touch lists:

• touches – a list of all of the touch points currently on the screen.
• targetTouches – a list of the touch points on the target DOM element.
• changedTouches – a list of the touch points whose items depend on the associated event type:

        • For the touchstart event, it is a list of the touch points that became active with the current event.
        • For the touchmove event, it is a list of the touch points that have changed since the last event.
        • For the touchend event, it is a list of the touch points that have been removed from the surface (that is, the set of touch points corresponding to fingers no longer touching the surface).

        Together, these interfaces define a relatively low-level set of features, yet they support many kinds of touch-based interaction, including the familiar multi-touch gestures such as multi-finger swipe, rotation, pinch and zoom.
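As a small illustration of how a multi-touch gesture can be built from these low-level interfaces, a pinch/zoom factor can be computed from the distance between two touch points. The helper functions below are hypothetical, not part of the spec; they operate on plain objects shaped like the entries of a TouchList:

```javascript
// Distance between two touch points (objects with pageX/pageY,
// as found in a TouchList).
function touchDistance(t1, t2) {
  return Math.hypot(t2.pageX - t1.pageX, t2.pageY - t1.pageY);
}

// Pinch scale factor: ratio of the current two-finger distance to the
// distance recorded when the second finger first touched the surface.
// A value above 1 indicates a zoom-in (spread), below 1 a zoom-out (pinch).
function pinchScale(startTouches, currentTouches) {
  return (
    touchDistance(currentTouches[0], currentTouches[1]) /
    touchDistance(startTouches[0], startTouches[1])
  );
}
```

In a real handler, the start distance would typically be captured in touchstart when event.touches reaches length two, and the scale recomputed on each touchmove.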

        From interfaces to gestures

        An application may consider different factors when defining the semantics of a gesture. For instance, the distance a touch point traveled from its starting location to its location when the touch ended. Another potential factor is time; for example, the time elapsed between the touch’s start and the touch’s end, or the time lapse between two consecutive taps intended to create a double-tap gesture. The directionality of a swipe (for example left to right, right to left, etc.) is another factor to consider.
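These factors can be sketched as a small classifier. The function below is a hypothetical example, not part of the Touch Events spec; the distance and time thresholds are illustrative assumptions:

```javascript
// Classify a completed touch interaction from its start and end samples
// (objects with x, y, and a time in milliseconds).
// Threshold values are illustrative assumptions, not values from the spec.
const SWIPE_MIN_DISTANCE = 30; // pixels
const TAP_MAX_DURATION = 300;  // milliseconds

function classifyGesture(start, end) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  const distance = Math.hypot(dx, dy);
  const elapsed = end.time - start.time;

  // Short travel: distinguish tap from long-press by elapsed time.
  if (distance < SWIPE_MIN_DISTANCE) {
    return elapsed <= TAP_MAX_DURATION ? "tap" : "long-press";
  }
  // Directionality: the dominant axis decides left/right vs. up/down.
  if (Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? "swipe-right" : "swipe-left";
  }
  return dy > 0 ? "swipe-down" : "swipe-up";
}
```

For example, a touch that travels 120 pixels to the right in 180 ms classifies as "swipe-right", while one that moves only a few pixels and ends within 300 ms classifies as "tap".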

        The touch list(s) an application uses depends on the semantics of the application’s gestures. For example, if an application supports a single touch (tap) on one element, it would use the targetTouches list in the touchstart event handler to process the touch point in an application-specific manner. If an application supports two-finger swipe for any two touch points, it will use the changedTouches list in the touchmove event handler to determine if two touch points had moved and then implement the semantics of that gesture in an application-specific manner.

        Browsers typically dispatch emulated mouse and click events when there is only a single active touch point. Multi-touch interactions involving two or more active touch points will usually only generate touch events. To prevent the emulated mouse events from being sent, use the preventDefault() method in the touch event handlers. For more information about the interaction between mouse and touch events, see Supporting both TouchEvent and MouseEvent .
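A minimal sketch of suppressing the emulated mouse events: the handler calls preventDefault() on the incoming event. So that it can run outside a browser, the example exercises the handler with a stub event object rather than a real TouchEvent:

```javascript
// A touch handler that calls preventDefault() so the browser does not
// also dispatch emulated mouse/click events for the same interaction.
function onTouchEnd(event) {
  event.preventDefault(); // suppress the emulated mouse events
  // ...application-specific handling of event.changedTouches goes here...
}

// In a browser you would register it with:
//   element.addEventListener("touchend", onTouchEnd);
// Here we exercise it with a minimal stub event object instead.
const stubEvent = {
  defaultPrevented: false,
  preventDefault() { this.defaultPrevented = true; },
  changedTouches: [],
};
onTouchEnd(stubEvent);
```

Note that in modern browsers, listeners on some targets are passive by default, so you may need to register with addEventListener("touchmove", handler, { passive: false }) for preventDefault() to take effect.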

        Basic steps

This section describes basic usage of the above interfaces. See the Touch Events Overview for a more detailed example.

        Register an event handler for each touch event type.

        Process an event in an event handler, implementing the application’s gesture semantics.
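The two steps above can be sketched as follows. The example tracks active touch points by their unique identifier; the registration lines are shown as comments because they assume a real DOM element, and the handlers are exercised with plain objects standing in for TouchEvents:

```javascript
// Step 2's "gesture semantics" here are deliberately simple: maintain a
// Map of currently active touch points, keyed by each touch's identifier.
const activeTouches = new Map();

function onTouchStart(event) {
  for (const touch of event.changedTouches) {
    activeTouches.set(touch.identifier, { x: touch.pageX, y: touch.pageY });
  }
}

function onTouchMove(event) {
  for (const touch of event.changedTouches) {
    activeTouches.set(touch.identifier, { x: touch.pageX, y: touch.pageY });
  }
}

function onTouchEndOrCancel(event) {
  for (const touch of event.changedTouches) {
    activeTouches.delete(touch.identifier);
  }
}

// Step 1, with a real DOM element, would look like:
//   el.addEventListener("touchstart", onTouchStart);
//   el.addEventListener("touchmove", onTouchMove);
//   el.addEventListener("touchend", onTouchEndOrCancel);
//   el.addEventListener("touchcancel", onTouchEndOrCancel);

// Exercised here with plain objects standing in for TouchEvents:
onTouchStart({ changedTouches: [{ identifier: 0, pageX: 10, pageY: 20 }] });
onTouchMove({ changedTouches: [{ identifier: 0, pageX: 15, pageY: 25 }] });
```

Using changedTouches in each handler means the logic works unchanged for single- and multi-touch interactions: each event only reports the touch points that started, moved, or ended.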

        Users interact with mobile apps mainly through touch. They can use a combination of gestures, such as tapping on a button, scrolling a list, or zooming on a map. React Native provides components to handle all sorts of common gestures, as well as a comprehensive gesture responder system to allow for more advanced gesture recognition, but the one component you will most likely be interested in is the basic Button.

        Displaying a basic button​

        Button provides a basic button component that is rendered nicely on all platforms. The minimal example to display a button looks like this:
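A minimal sketch in the style of the React Native docs (the component name and alert text are illustrative):

```jsx
import React from 'react';
import { Alert, Button } from 'react-native';

const SimpleButton = () => (
  <Button
    title="Press me"
    onPress={() => Alert.alert('Simple Button pressed')}
  />
);

export default SimpleButton;
```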

        This will render a blue label on iOS, and a blue rounded rectangle with light text on Android. Pressing the button will call the "onPress" function, which in this case displays an alert popup. If you like, you can specify a "color" prop to change the color of your button.

        Go ahead and play around with the Button component using the example below. You can select which platform your app is previewed in by clicking on the toggle in the bottom right and then clicking on "Tap to Play" to preview the app.


        If the basic button doesn't look right for your app, you can build your own button using any of the "Touchable" components provided by React Native. The "Touchable" components provide the capability to capture tapping gestures, and can display feedback when a gesture is recognized. These components do not provide any default styling, however, so you will need to do a bit of work to get them looking nicely in your app.

        Which "Touchable" component you use will depend on what kind of feedback you want to provide:

        Generally, you can use TouchableHighlight anywhere you would use a button or link on web. The view's background will be darkened when the user presses down on the button.

        You may consider using TouchableNativeFeedback on Android to display ink surface reaction ripples that respond to the user's touch.

        TouchableOpacity can be used to provide feedback by reducing the opacity of the button, allowing the background to be seen through while the user is pressing down.

        If you need to handle a tap gesture but you don't want any feedback to be displayed, use TouchableWithoutFeedback.

In some cases, you may want to detect when a user presses and holds a view for a set amount of time. These long presses can be handled by passing a function to the onLongPress prop of any of the "Touchable" components.

        Let's see all of these in action:

        Scrolling and swiping​

        Gestures commonly used on devices with touchable screens include swipes and pans. These allow the user to scroll through a list of items, or swipe through pages of content. For these, check out the ScrollView Core Component.

This component allows for implementing swipeable rows or similar interactions. It renders its children within a pannable container that allows for horizontal swiping left and right. While swiping, one of two "action" containers can be shown, depending on whether the user swipes left or right (the containers can be rendered by the renderLeftActions or renderRightActions props).


Similar to DrawerLayout, the Swipeable component isn't exported by default from the react-native-gesture-handler package. To use it, import it in the following way:
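The import below follows the form documented for react-native-gesture-handler at the time this text appears to have been written; check the library's current docs if it fails to resolve:

```javascript
import Swipeable from 'react-native-gesture-handler/Swipeable';
```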


        friction #

A number that specifies how much the visual interaction will be delayed compared to the gesture distance. For example, a value of 1 indicates that the swipeable panel should exactly follow the gesture, while 2 means it will move two times "slower".

        leftThreshold #

The distance from the left edge at which the released panel will animate to the open state (or the open panel will animate into the closed state). By default, it is half of the panel's width.

        rightThreshold #

The distance from the right edge at which the released panel will animate to the open state (or the open panel will animate into the closed state). By default, it is half of the panel's width.

        overshootLeft #

        a boolean value indicating if the swipeable panel can be pulled further than the left actions panel's width. It is set to true by default as long as the left panel render method is present.

        overshootRight #

        a boolean value indicating if the swipeable panel can be pulled further than the right actions panel's width. It is set to true by default as long as the right panel render method is present.

        overshootFriction #

A number that specifies how much the visual interaction will be delayed compared to the gesture distance at overshoot. The default value is 1, which means no friction; for a native feel, try 8 or above.

        onSwipeableLeftOpen #

This callback is deprecated and will be removed in the next version; please use onSwipeableOpen(direction) instead.

Method that is called when the left action panel opens.

onSwipeableRightOpen #

This callback is deprecated and will be removed in the next version; please use onSwipeableOpen(direction) instead.

Method that is called when the right action panel opens.

onSwipeableOpen #

Method that is called when an action panel opens (either left or right). Takes the swipe direction as an argument.

onSwipeableClose #

Method that is called when the action panel is closed. Takes the swipe direction as an argument.

onSwipeableLeftWillOpen #

This callback is deprecated and will be removed in the next version; please use onSwipeableWillOpen(direction) instead.

Method that is called when the left action panel starts animating open.

onSwipeableRightWillOpen #

This callback is deprecated and will be removed in the next version; please use onSwipeableWillOpen(direction) instead.

Method that is called when the right action panel starts animating open.

onSwipeableWillOpen #

Method that is called when an action panel starts animating open (either left or right). Takes the swipe direction as an argument.

onSwipeableWillClose #

Method that is called when the action panel starts animating closed. Takes the swipe direction as an argument.

        renderLeftActions #

Method that is expected to return an action panel that is going to be revealed from the left side when the user swipes right. The map below describes the values to use as inputRange for extra interpolation (AnimatedValue: [startValue, endValue]):

• progressAnimatedValue: [0, 1]
• dragAnimatedValue: [0, +]

To support RTL flexbox layouts, use flexDirection styling.

        renderRightActions #

Method that is expected to return an action panel that is going to be revealed from the right side when the user swipes left. The map below describes the values to use as inputRange for extra interpolation (AnimatedValue: [startValue, endValue]):

• progressAnimatedValue: [0, 1]
• dragAnimatedValue: [0, -]

To support RTL flexbox layouts, use flexDirection styling.

        containerStyle #

        style object for the container (Animated.View), for example to override overflow: 'hidden' .

        childrenContainerStyle #

        style object for the children container (Animated.View), for example to apply flex: 1 .

        enableTrackpadTwoFingerGesture (iOS only)#

Enables two-finger gestures on supported devices, for example iPads with trackpads. If not enabled, the gesture will require click + drag; with enableTrackpadTwoFingerGesture enabled, swiping with two fingers will also trigger the gesture.


Using a reference to the Swipeable component, it's possible to trigger some actions on it:

close #

Method that closes the component.

openLeft #

Method that opens the component on the left side.

openRight #

Method that opens the component on the right side.
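A sketch of driving these methods through a ref (the component names and callback are illustrative, not from the library's docs):

```jsx
import React, { useRef } from 'react';
import Swipeable from 'react-native-gesture-handler/Swipeable';

function Row({ children, renderLeftActions }) {
  const swipeableRef = useRef(null);

  // e.g. close the row programmatically once its action has been handled
  const handleActionDone = () => swipeableRef.current?.close();

  return (
    <Swipeable ref={swipeableRef} renderLeftActions={renderLeftActions}>
      {children}
    </Swipeable>
  );
}
```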


        See the swipeable example from GestureHandler Example App or view it directly on your phone by visiting our expo demo.

        Gesture controls make it super easy to navigate quickly around your Android device.

        Wouldn’t it be great if you could turn off your phone’s screen just by swiping down from the top of your display? Or access your recent apps by swiping in from the right? These shortcuts would allow you to use your Android smartphone or tablet in a whole new way.

You can decide which gestures trigger which results, giving you full control over your device. Here are a few apps that will let you set up these kinds of gestures.

        1. All In One Gestures [No Longer Available]

        One of the many custom gestures you can create with All In One Gestures is taking a screenshot without touching a single button. Say goodbye to the classic two-button combination that many Android users have had to deal with.

        To create this custom gesture, open the app and make sure that you’re in the Swipe tab. Tap on Enable and choose the area you’ll need to swipe to take the screenshot. Once it’s selected, choose screenshot, and you’ll see a grayish area appear in the place that you chose. That’s where you need to swipe to take your screenshot.

        You can also create custom gestures by using the hard keys and the status bar on your Android device as well. Holding the home button could turn off the screen, or double-tapping the back button could kill the app you’re using.

        Download — All In One Gestures (Free)

        2. Quickify – Gesture Shortcut

        With Quickify, you can quickly open any app or feature on your phone regardless of what you’re viewing. Are you viewing your phone’s gallery but need to switch over to WhatsApp to send a message? First, set up the custom gesture by tapping on the plus sign in the top right, then draw your gesture.

        Tap on the Application option and choose WhatsApp (or any other app you want). You should now see your masterpiece on the list of created custom gestures.

Simply repeat the process if you want to add a custom gesture to open a URL, open an app, make a call, send a text message, or toggle a certain feature on your phone.

        Download — Quickify (Free)

        3. Nova Launcher

        Nova Launcher will not only let you add the custom gestures you want, but it can also change the way your phone looks as well by replacing your standard launcher. Once it’s installed, open the app and go to Settings > Gestures & inputs > Gestures.

The actions that can be performed are nicely organized into three tabs: Nova, Apps, and Shortcuts.

        In the Nova tab, you can create custom gestures to open the app drawer, lock the screen, open recent apps, or do other similar functions. With the Apps tab, you can assign certain gestures to open specific apps.

        The Shortcuts tab allows you to assign a gesture to actions such as making a call, opening a specific site in your browser, or creating a spreadsheet, etc. Unfortunately, some of these are “Prime” features, meaning that you’ll have to upgrade to Nova Prime to access them.

        Download — Nova Launcher (Free)

        Download — Nova Launcher Prime ($4.99)

        4. Dolphin Web Browser

        In addition to being a generally customizable browser, Dolphin is also home to some fantastic custom gestures. To start creating them, open the browser and tap on the gray dolphin at the bottom. Tap on the cog wheel in the bottom right and choose the Gesture and Sonar option.

        If you want a gesture to open a site, type in the URL, then press the Add+ option. You can either accept the gesture that has already been created or you can create your own by tapping on the Redo button.

        In the More Actions section, you can change other gestures that have already been created or you can make your own. For example, you can create custom gestures for closing all tabs, accessing your browser history, opening your most visited sites, and more.

        Using these gestures is simple. When you open the browser, just long-press on the gray dolphin and slide upwards and to the left. When the new window appears, draw the gesture you created, and you’re good to go.

        Download — Dolphin Browser (Free)

        5. Gravity Gestures

        Gravity Gestures works a little differently than the other apps on this list. Instead of drawing on the screen, you simply rotate your wrist with the phone in your hand. Those with modern Motorola devices like the amazing Moto G4 Plus have used a similar feature called Moto Gesture.

        When you first download Gravity Gestures, you’ll be taken through all the gestures you can set up. When the tutorial is done, you can toggle on the Gravity Gesture service. To create a custom gesture, tap on the red button in the bottom right and choose the gesture you want to use.

        You can choose between X-rotation, Y-rotation, Z-rotation, and Shake. After that, choose an action such as toggling the flashlight, activating a voice assistant, opening a website, making a call, toggling Bluetooth, etc.

        If you experience problems with the app not recognizing your gestures, you can modify its sensitivity in the settings. Tap on the sensitivity option and choose between high, medium, or low. The highest setting will make the gestures easier to do, but could result in accidental activations.

        Download — Gravity Gestures (Free)

        What Custom Gestures Do You Use?

        Creating custom gestures on your Android device is easy and can help you save valuable time. Now you can navigate around your smartphone or tablet with some slick gestures.

        You’ve seen various ways you can create your own gestures, but new methods emerge every day. Do you use a custom gesture app that is not on the list? Do you have a particular gesture that you rely on daily? Tell us your thoughts in the comments.

        Though designed for the iPhone, Apple’s AirPods are also compatible with Android smartphones and tablets, so you can take advantage of Apple’s wire-free tech even if you’re an Android user or have both Android and Apple devices.

        You do, of course, lose some bells and whistles like Apple’s unique AirPods pairing features. AirPods work like any other Bluetooth headphones on an Android device, and there are ways to restore at least some of their functionality through Android apps.

        AirPod Features That Don’t Work on Android (Out of the Box)

        When paired with an ‌iPhone‌, iPad, Apple Watch, or Mac, the AirPods offer a rich set of features thanks to the W1 wireless chip in the first generation version or the H1 chip in the AirPods 2 or AirPods 3, the accelerometer and other sensors, and deep integration with Apple’s devices.

        Here’s a list of AirPods features you lose out on when using the AirPods with Android:

        • Siri. On ‌iPhone‌, you can press or tap to access Siri for doing things like changing songs, adjusting volume, or just asking simple questions. If you have AirPods 2 or 3, you can also use “Hey ‌Siri‌” to activate ‌Siri‌.
        • Customizing Double Tap. In the Settings app on an iOS device, you can change what the tap/press does. Options include accessing ‌Siri‌, Play/Pause, Next Track, and Previous Track.
        • Automatic switching. AirPods are linked to an iCloud account for Apple users, which allows them to easily switch between using the AirPods with an ‌iPad‌, ‌iPhone‌, Apple Watch, and Mac.
        • Simple setup. Pairing with an iOS device only requires opening the case near said device and following the quick setup steps.
        • Checking AirPods battery. On the ‌iPhone‌ and Apple Watch, you can ask ‌Siri‌ about the AirPods battery life or check it from the Today center on ‌iPhone‌ or the Control Center on Apple Watch. Luckily, there is a way to replace this functionality on Android with the AirBattery app or Assistant Trigger.
        • Automatic ear detection. On ‌iPhone‌, when you remove an AirPod from your ear, it pauses whatever you’re listening to until you put the AirPod back into your ear.
        • Single AirPod listening. Listening to music with a single AirPod is limited to iOS devices because it uses ear detection functionality. On Android, you need to have both AirPods out of the case for them to connect.
        • Spatial Audio. When paired with Apple devices, ‌AirPods 3‌ (and AirPods Pro) offer Spatial Audio support for Apple Music, allowing for a more immersive listening experience that makes it sound like audio is coming from all around you.

        AirPod Features That Work on Android

        Out of the box, AirPods functionality on Android is quite limited, but the double tap or press feature works. When you double tap on one of the AirPods (or press on the Force Sensor on the stem with ‌AirPods 3‌), it will play or pause the music. If you’ve customized your AirPods using an iOS device, next track and previous track gestures will also work, but ‌Siri‌ won’t, nor will “Hey ‌Siri‌” on AirPods 2 or ‌AirPods 3‌ as that requires an Apple device.

        One additional benefit to AirPods on Android — Bluetooth connectivity distance. AirPods generally have a much longer Bluetooth range than other Bluetooth-enabled headphones, and this is true on both Android and iOS.

        AirPods lose the rest of their unique functionality on Android, but there are a few Android apps that are designed to restore some of it, adding to what you can do with AirPods on Android.

        How to Add Back Lost AirPod Functionality

        AirBattery – AirBattery adds a feature that lets you see the charge level of your AirPods. It includes battery levels for the left AirPod, right AirPod, and charging case, much like the battery interface on iOS devices. It also has an experimental ear detection feature when used with Spotify, which can pause music when you remove an AirPod.

        AssistantTrigger – AssistantTrigger also lets you see the battery level of your AirPods, and it also says it adds ear detection features. Most notably, it can be used to change the tap gestures, letting you set up Google Assistant to be triggered with a double tap.

        How to Pair AirPods to an Android Smartphone

AirPods pair to an Android smartphone like any other Bluetooth device, but there are some specific steps to follow.

        1. Open up the AirPods case.
        2. Go to the Bluetooth settings on your Android device.
3. Press and hold the pairing button on the back of the AirPods case until the status light flashes white.
        4. Look for AirPods in the list of Bluetooth accessories and then tap the “Pair” button.

        After tapping “Pair,” the AirPods should successfully connect to your Android device.

        Do AirPods Work on Android?

        Even if you use Android devices exclusively, the AirPods are a great wire-free earbud option that outperform many other Bluetooth earbuds available for Android devices. If you have both Android and iOS devices, AirPods are a no brainer because you’ll be able to use them on both devices with few tradeoffs if you download the appropriate Android apps.

        Even without many of the bells and whistles available on iOS devices, AirPods have some attractive features that may appeal to Android users, though there are wire-free Android specific options that Android users might want to look at.

        Many AirPods users find them to be quite comfortable and stable in the ears, with little risk of them falling out, and the battery life is absolutely appealing. Apple in 2021 introduced the ‌AirPods 3‌, which have a more refined fit that’s even more comfortable in the ears. AirPods have a charging case that provides 24 hours of battery life in a portable, compact form factor. The case is also easy to charge, so long as you have a Lightning cable.

        There’s one major reason that you might want to avoid AirPods on Android, and that’s audio quality. Apple’s AAC codec does not perform as well on Android as it does on the ‌iPhone‌, so there may be degraded streaming on Android because of the way Android handles Bluetooth codecs.

        How to use trigger to control android with gestures

        Last updated: November 18th, 2019 at 09:55 UTC+01:00

        Gestures are slowly but surely becoming a mainstream method of navigation on smartphones. Navigation gestures were introduced to Samsung Galaxy smartphones with Android Pie, and you can select whether you want to use gestures or use traditional navigation buttons. On Android 10, you can even select between two different gesture systems: Samsung’s own implementation and the stock Android system that you find on Google’s Pixel devices.

        While navigation gestures are great to have, on Galaxy Note smartphones, gestures can interfere when you use the S Pen for drawing or writing near the edges of the screen. But, on the Galaxy Note 10, you can easily prevent that from happening thanks to a setting that, when enabled, will only allow you to perform navigation gestures when the display is operated by your fingers.

        Block navigation gestures with S Pen on your Galaxy Note 10

        The setting is called Block gestures with S Pen, and many of you will probably have seen it when you enabled gestures on your Galaxy Note 10 the first time. If you didn’t, it’s quite simple to enable it. Open the Settings app on your Galaxy Note 10/Note 10+, select Display, scroll down and tap Navigation bar. Here, if you are using gestures, you will see the Block gestures with S Pen option at the bottom.


        The Block gestures with S Pen setting can also be found on the Android 10 beta on the Galaxy Note 10 and works with both types of navigation gestures that are available on Android 10. The Galaxy Note 8 and Galaxy Note 9 do not have it on Android Pie, but it should come to the Galaxy Note 9 with the Android 10 update.

        Do you use navigation gestures on your Galaxy Note 10/10+ and have found them to be a problem when using the S Pen? Let us know in the comments below, and also check out more such tips for your phone in our Galaxy Note 10 tips and tricks section.

        The Google Assistant is a core part of Android — Google even made it possible to launch the Assistant by long-pressing the home button. But with Android 10’s new gesture controls, there isn’t a home button to long-press, so Google created a new gesture to replace it.

        With Android 10 lacking any navigational buttons (depending on your settings), you now need to use gestures to pull up actions once associated with buttons and long presses. When it comes to Google Assistant, the new method isn’t as easy as a long press of a button and will take some practice. But once you get the hang of it, you can quickly pull up Google Assistant as you did before. And if you continue to struggle, you can always take advantage of some phones’ dedicated Assistant buttons or squeeze sensors.


        Launching Google Assistant

        From either corner of the bottom edge, swipe up and slightly toward the center (at roughly a 7° angle). You don’t need to swipe far; about an inch is enough. Once again, this will take some practice, as it isn’t the easiest gesture to pull off, but once you get it, it works reliably.


        If you’re struggling with this gesture, there’s another one to try: swipe in from either bottom corner, then quickly swipe back off the screen. In other words, a quick back-and-forth scrubbing motion with your thumb that starts from the corner.

        If you are still struggling, you do have some options. First, phones such as the LG G8 have a dedicated Google Assistant button to launch the app quickly. For Samsung users, it is pretty easy to reprogram the Bixby button to launch Google Assistant by either using the Bixby app or a third-party option. For Pixel 3, 3 XL, and HTC U12+ users, you can program the Active Edge (called Edge Sense on HTC) to launch the Assistant as well.

        Finally, there is the app Button Mapper. Using this free third-party app, you can reprogram any physical button on your phone to launch an app, including Google Assistant. Check out the guide below for more information.


        DSO 2.5 and later use a simple gesture based user interface called Act On Touch.


        Act On Touch makes mouse, track-pad and touch-screen use easy. Whether you use a PC or Mac, netbook or tablet running Windows Touch, Android or iOS, DSO works the same way. Act On Touch simply means you can click, drag or select to adjust almost any DSO parameter using your finger or stylus (on a touch-screen device) or a mouse or track-pad (on a PC or Mac) to change its value or select related functions.

        Shown here are three panels that use Act On Touch.

        • Dynamic Trigger Control (top)
        • Cursor Measurement Control (middle)
        • Zoom Timebase Control (bottom)

        Using Act On Touch is easy; click and drag up and down or left and right on a parameter to adjust its value. Click on the left or right edge of the parameter to select a previous or next value.

        Right-click (or control-click on a Mac or press-and-hold on a tablet) to pop up a context menu and double-click to open an editor to type in a value or select a default value.

        We call these UI interactions Act On Touch Gestures.


        Channel Control Panel

        They apply to trigger, preview and waveform displays too.

        Objects on the display can also be clicked and/or dragged to change location or value.

        For example, the trigger level can be adjusted by moving the trigger cursor, and measurement cursors or waveform feature bands on the main display can be moved to make various measurements.

        Act On Touch means DSO is more compact and informative because each parameter can report a value and accept user input to change that value without unnecessary scroll bars or other UI widgets.

        The channel control panel shown above provides a good example; using this panel you can choose the source and signal coupling, adjust voltage scaling, input range, probe scaling and signal offsets and perform channel calibration, all using simple Act On Touch Gestures on parameter widgets.

        Act On Touch Gestures

        Up to seven single-touch Act On Touch Gestures are supported for each parameter:

        • Click: click the center of the widget to perform the primary parameter action.
        • Select: double-click anywhere to perform the secondary parameter action.
        • Choose: menu-click^ anywhere to pop up a context menu for the parameter.
        • More: click the right side to choose the next (or higher) value.
        • Less: click the left side to choose the previous (or lower) value.
        • Change X: click-drag+ left/right to adjust a parameter value continuously (X-axis).
        • Change Y: click-drag+ up/down to adjust a parameter value continuously (Y-axis).

        Not all parameters respond to all gestures and some respond in different ways. Some gestures apply to non-parameters (e.g. pop-up menus on buttons or lights) and multi-touch gestures apply to other objects such as waveforms and displays. These differences are explained elsewhere.
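        The click-based gestures above can be summarised as a simple hit-zone dispatcher. The sketch below is a hypothetical illustration, not DSO code: the resolveClick function and its 25% edge zones are assumptions made only to show how a single click position might map to the Less, Click and More gestures.

        ```typescript
        // Hypothetical sketch: resolving a single click on a parameter widget
        // to an Act On Touch gesture by hit zone. The 25% edge zones are an
        // assumption for illustration; real DSO widgets define their own regions.
        type ClickGesture = "Less" | "Click" | "More";

        function resolveClick(x: number, widgetWidth: number): ClickGesture {
          const edge = widgetWidth * 0.25;
          if (x < edge) return "Less";               // left side: previous/lower value
          if (x > widgetWidth - edge) return "More"; // right side: next/higher value
          return "Click";                            // center: primary parameter action
        }
        ```

        For example, on a 100-pixel-wide widget, a click at x = 10 would select the previous value, while a click at x = 50 would perform the primary action.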

        Parameter Widget Colours

        DSO uses colour to classify all parameter types according to their purpose or usage:

        The colour of each parameter identifies it as belonging to an analog or logic channel, or to a cursor, time or frequency measurement. All others are general parameters which control the operation of DSO or select various options. Logic channels use standard electronic colour codes (except D0, which is white, not black) and analog channels use defaults from across the BitScope product range. Colours may be changed if necessary; the colours described here are the defaults.

        Parameter Types and Values

        Parameter widgets report alpha-numeric values associated with controls and/or measurements and appear in appropriate colours with or without highlights. There are four types of parameter:

        • Option (alpha): a set of non-numeric values, e.g. Display Mode, which selects from the values NORMAL, DECAY or OVERLAY.
        • Selector (alpha-numeric): a discrete set of numeric values, e.g. Input Range, which selects from a fixed and finite set of voltage ranges supported by the device.
        • Vernier (alpha-numeric): a real parameter with calibrated and vernier values, e.g. Input Scale, which selects from a set of voltages but also allows arbitrary scaling.
        • Real (alpha-numeric): a real parameter with arbitrary value, e.g. Focus Time, which specifies the time (relative to the trigger point) of the center of the display.

        Option parameters show text reporting the selected option (i.e. they are not numeric) but the other parameters may report alpha-numeric values.

        Usually, alpha-numeric parameters report a numeric value. However, if the parameter is set to its default value (e.g. the input offset is zero volts) it reports the parameter name, not the value. This makes it easy to identify parameters by name (as most are set to their default at startup).

        Some parameters can be set to track others. When a parameter is tracking another it reports the name of the parameter it is tracking, not the value (e.g. the MARK cursor reports “MAX” when tracking waveform maxima). The value itself can often be seen via the tracked parameter widget or via a derived measurement elsewhere in the UI (e.g. Vpp).
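        The reporting rules above follow a simple precedence, which the sketch below illustrates. The Param shape and displayText helper are hypothetical, written only to show the order: a tracking name wins, then the parameter's own name at its default value, then the numeric value.

        ```typescript
        // Hypothetical model of what a parameter widget reports, per the rules
        // above: a tracking parameter reports the tracked name, a parameter at
        // its default value reports its own name, otherwise the value is shown.
        interface Param {
          name: string;
          value: number;
          defaultValue: number;
          tracking?: string; // name of the tracked parameter, if any
        }

        function displayText(p: Param): string {
          if (p.tracking) return p.tracking;             // e.g. MARK cursor reports "MAX"
          if (p.value === p.defaultValue) return p.name; // e.g. offset at zero volts
          return String(p.value);
        }
        ```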

        ^ Menu-Click on a Windows or Linux PC means clicking with the right mouse or track-pad button. On Mac OS X the equivalent is a Control-Click (hold the control key and click the mouse button or track-pad) and on touch-screen devices simply Press-and-Hold until the menu appears.

        You should create an instance of the Animation class in order to be able to cancel the animation. This is demonstrated below.

        The AnimationDefinition interface #

        The AnimationDefinition interface is central for defining an animation for one or more properties of a single View. The animatable properties are:

        • opacity
        • backgroundColor
        • translateX and translateY
        • scaleX and scaleY
        • rotate
        • width and height

        The AnimationDefinition interface has the following members:

        • target: The view whose property is to be animated.
        • opacity: Animates the opacity of the view. Value should be a number between 0.0 and 1.0.
        • backgroundColor: Animates the backgroundColor of the view.
        • translate: Animates the translate affine transform of the view. Value should be a Pair.
        • scale: Animates the scale affine transform of the view. Value should be a Pair.
        • rotate: Animates the rotate affine transform of the view. Value should be a number specifying the rotation amount in degrees.
        • duration: The length of the animation in milliseconds. The default duration is 300 milliseconds.
        • delay: The amount of time, in milliseconds, to delay starting the animation.
        • iterations: Specifies how many times the animation should be played. Default is 1. iOS animations support fractional iterations, e.g., 1.5. To repeat an animation infinitely, use Number.POSITIVE_INFINITY .
        • curve: An optional animation curve. Possible values are contained in the AnimationCurve. Alternatively, you can pass an instance of type UIViewAnimationCurve for iOS or android.animation.TimeInterpolator for Android.
        • width: Animates view's width.
        • height: Animates view's height.

        All members of the interface are optional and have default values with the following exceptions:

        • target is only optional when calling the animate method of a View instance since it is set automatically for you.
        • You must specify at least one property from this list: opacity, backgroundColor, scale, rotate or translate.

        The Animation class #

        The Animation class represents a set of one or more AnimationDefinitions that can be played either simultaneously or sequentially. This class is typically used when you need to animate several views together. The constructor of the Animation class accepts an array of AnimationDefinitions and a boolean parameter indicating whether to play the animations sequentially. Creating an instance of the Animation class does not start the animation playback. The class has four members:

        • play: A method that starts the animation and returns the instance it was called on for fluent animation chaining.
        • cancel: A void method that stops the animation.
        • finished: A promise that will be resolved when the animation finishes or rejected when the animation is cancelled or stops for another reason.
        • isPlaying: A boolean property returning true if the animation is currently playing.
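        To make these semantics concrete, here is a framework-free sketch with the same surface (play, cancel, finished, isPlaying). It is an illustration of the contract described above, not NativeScript's actual implementation; the class name and timer-based playback are assumptions.

        ```typescript
        // Simplified sketch of the Animation class contract: play() returns the
        // instance for chaining, cancel() stops playback and rejects `finished`,
        // and isPlaying reflects the current state.
        class AnimationSketch {
          isPlaying = false;
          finished: Promise<void>;
          private resolveFinished!: () => void;
          private rejectFinished!: (reason: Error) => void;

          constructor(private durationMs = 300) { // 300 ms is the default duration
            this.finished = new Promise<void>((resolve, reject) => {
              this.resolveFinished = resolve;
              this.rejectFinished = reject;
            });
          }

          play(): this {
            this.isPlaying = true;
            setTimeout(() => {
              if (this.isPlaying) {
                this.isPlaying = false;
                this.resolveFinished(); // resolved when the animation finishes
              }
            }, this.durationMs);
            return this; // fluent chaining, as in the real class
          }

          cancel(): void {
            if (this.isPlaying) {
              this.isPlaying = false;
              this.rejectFinished(new Error("Animation cancelled")); // rejects `finished`
            }
          }
        }
        ```

        Usage follows the same pattern as the real class, e.g. `new AnimationSketch().play().finished.catch(() => { /* cancelled */ });`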

        Animating multiple properties #

        It is easy to animate multiple properties at once; just pass the desired animatable properties and the corresponding values when calling the animate function.
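        For example, a single definition can combine several of the animatable properties listed earlier. The object below is shown as plain data so its shape is clear; in a real NativeScript app it would be passed to a view's animate method (the view name is a placeholder).

        ```typescript
        // An animation definition combining several animatable properties at
        // once. In a NativeScript app this would be passed to a view's animate
        // method, e.g. someView.animate(definition).
        const definition = {
          opacity: 0.5,                // fade to half visibility
          translate: { x: 100, y: 0 }, // move 100 units horizontally
          scale: { x: 1.2, y: 1.2 },   // grow by 20%
          rotate: 45,                  // degrees
          duration: 300,               // milliseconds (the default)
        };
        ```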


        Gesture navigation is one of the biggest new features in Android 10, but it comes with some radical shifts to how we use the operating system. One interaction users are still trying to learn with the Android 10 gesture system is opening side, or hamburger, menus in applications. Here are a couple of ways to do that.

        With Android 10’s new gestures, a swipe inward from either side of the display will trigger the back action. That’s bad news for side hamburger menus in applications, as the back gesture will often take users out of the application instead of opening up that menu.

        Google has already explained that the back action is a core part of Android and is used even more often than the home action, so assigning an intuitive and reliable gesture was crucial for this system. Many longtime Android users disagree, but Google says the vast majority of Android users didn’t even know a swipe could open up side menus as most would simply tap a button within the app.

        Luckily, there are still ways to open side hamburger menus in Android 10.

        Opening Android 10 side menus with ‘peeking’

        The first way to open up a side menu in Android 10 is with an action called “peeking.” This was first added in a previous beta of Android Q and, at the time, wasn’t super reliable. In the final rollout of Android 10, though, this behavior does help make it a bit easier to open up side menus.

        A long-press on the side of the display where a side menu resides should trigger the peeking behavior. After roughly a second, the menu will slightly emerge and give you the opportunity to fully slide it out. This is still a bit tricky to master, but once you’ve practiced it a handful of times, it makes opening side menus without tapping the top button a lot less frustrating.

        Opening Android 10 side menus with an angled swipe

        Another behavior that Google has never officially acknowledged is the ability to open these menus with a swipe downward at a 45-degree angle. We previously detailed this motion in a video back when gestures were first released in beta, and the gesture has only become more reliable in this final build.

        A swipe at a roughly 45-degree angle on any app with a side hamburger menu will open up that menu while using Android 10’s gesture navigation. Again, this one can take some time to master, but once learned it feels relatively natural and works well.

        Of course, you can always just tap the side menu button at the top of the screen for a guaranteed way to open up a side menu in Android 10.

        Ionic Gestures is a utility that allows developers to build custom gestures and interactions for their application in a platform agnostic manner. Developers do not need to be using a particular framework such as React or Angular, nor do they even need to be building an Ionic app! As long as developers have access to v5.0 or greater of Ionic Framework, they will have access to all of Ionic Gestures.

        Building complex gestures can be time consuming. Other libraries that provide custom gestures are often too heavy-handed and end up capturing mouse or touch events without letting them propagate. This can result in other elements no longer being scrollable or clickable.


        • JavaScript
        • Angular
        • React
        • Vue

        Developers using Ionic Core and JavaScript should install the latest version of @ionic/core .

        Developers using Ionic Core and TypeScript should install the latest version of @ionic/core .

        Developers using Angular should install the latest version of @ionic/angular . Gestures can be created via the GestureController dependency injection.

        By default, gesture callbacks do not run inside of NgZone. Developers can either set the runInsideAngularZone parameter to true when creating a gesture, or they can wrap their callbacks in an NgZone.run() call.

        Developers using React should install the latest version of @ionic/react . Full React wrappers are coming soon!

        Developers using Ionic Vue should install the latest version of @ionic/vue .
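        The usage pattern looks roughly like the sketch below. To keep it self-contained without a DOM, a stub stands in for the real createGesture exported by @ionic/core; the config shape (el, gestureName, onMove) follows the real API, while the isEnabled helper is an addition for this sketch only.

        ```typescript
        // Sketch of the createGesture usage pattern from @ionic/core (v5+).
        // The stub mirrors the call shape; in an app you would import
        // createGesture from '@ionic/core' and pass a real element as `el`.
        interface GestureConfig {
          el: unknown; // the element the gesture attaches to
          gestureName: string;
          onMove?: (detail: { deltaX: number }) => void;
        }

        function createGestureStub(config: GestureConfig) {
          let enabled = false;
          return {
            gestureName: config.gestureName,
            enable: () => { enabled = true; },
            destroy: () => { enabled = false; },
            isEnabled: () => enabled, // helper for this sketch only
          };
        }

        const gesture = createGestureStub({
          el: {},
          gestureName: "swipe",
          onMove: (detail) => { /* respond to detail.deltaX here */ },
        });
        gesture.enable();
        ```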