Blog

  • ngrev

    ngrev

    Graphical tool for reverse engineering Angular projects. It allows you to navigate the structure of your application and observe the relationships between the different modules, providers, and directives. The tool performs static code analysis, which means you don’t have to run your application in order to use it.

    ngrev is not maintained by the Angular team. It’s a side project developed by the open source community.

    How to use?

    macOS

    1. Go to the releases page.
    2. Download the latest *.dmg file.
    3. Install the application.

    The application is not signed, so you may have to explicitly allow your Mac to run it in System Preferences -> Security & Privacy -> General.

    Linux

    1. Go to the releases page.
    2. Download the latest *.AppImage file.
    3. Run the *.AppImage file (you may need to chmod +x *.AppImage).

    Windows

    1. Go to the releases page.
    2. Download the latest *.exe file.
    3. Install the application.

    Creating a custom theme

    You can add your own theme by creating a [theme-name].theme.json file in Electron [userData]/themes. For a sample theme see Dark.

    Application Requirements

    Your application needs to be compatible with the Angular Ivy compiler. ngrev is not tested with versions older than v11. To stay up to date, check the update guide on angular.io.

    Using with Angular CLI

    1. Open your Angular application’s directory.
    2. Make sure the dependencies are installed.
    3. Open ngrev.
    4. Click on Select Project and select [YOUR_CLI_APP]/src/tsconfig.app.json.

    Demo

    Demo here.

    Component template

    Themes

    Command + P

    Module Dependencies

    Release

    To release:

    1. Update version in package.json.
    2. git commit -am vX.Y.Z && git tag vX.Y.Z
    3. git push && git push --tags

    Contributors

    mgechev vik-13

    License

    MIT

    Visit original content creator repository https://github.com/mgechev/ngrev
  • labeller_img_python_telegram_BOT

    labeller_images_python_telegramBOT

    This is a bot to help collect data for any machine learning project.
    It was developed using the python-telegram-bot library.

    Usage & steps

    1. Download the repo and install python-telegram-bot:

    git clone https://github.com/diesilveira/labeller_img_python_telegram_BOT.git
    cd labeller_img_python_telegram_BOT
    pip install python-telegram-bot --upgrade
    2. Create a configuration file named conf.py in the same folder as main.py, then
      copy and paste the following text:

    TOKEN: str = 'YOUR TOKEN'
    
    #Like D:/Descargas/cleanAndDirtyImages
    PATH_FOLDER: str = 'YOUR PATH'
    LOCAL = 'false'
    
    BUTTONS = ["BUTTON1", "BUTTON2", "BUTTON3", "BUTTON4"]
    QUESTION = 'QUESTION TO THE PEOPLE ABOUT THE IMAGE?'
    CHOSE = 'Chose: '
    GREETING = ' Welcome and thanks for your help!'

    In TOKEN you must copy and paste the token of your bot, which you created previously.
    You can see how at: How do I create a bot? – Telegram, or follow these steps
    to create a bot with Telegram and get your TOKEN:

    • Send /newbot to BotFather from your Telegram account.
    • Set the name, shortname, and description (optional).
    • BotFather will send you your TOKEN.

    In PATH_FOLDER you must put the path of the folder that contains your image set (to run locally with images on your PC).
    If you want to use images from the web, you must set LOCAL = ‘false’ and create a file named “url_images.txt” in the same folder as main.py, containing the name of each image and its URL separated by “;”. Important: the URL provided must be a direct link to the file!
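    For reference, a url_images.txt following the format described above might look like this (the file names and URLs here are hypothetical):

    ```
    container_01.jpg;https://example.com/images/container_01.jpg
    container_02.jpg;https://example.com/images/container_02.jpg
    ```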

    In BUTTONS, set the names of the buttons, i.e. the labels for the images you send. You can add as many buttons as you want.
    In QUESTION, set the question that will be sent along with each image.

    3. Finally, run the project and voilà, the bot is now running.

    The names of the images will be saved in the log.txt file with their respective labels, and the images that have already been labeled are recorded in the finished file so that they are not labeled twice.

    The buttons will be shown in two columns; if the number of buttons is odd, the first one will occupy the entire row.
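    That layout can be sketched as follows (a minimal illustration of the described behavior, not the bot's actual code):

    ```python
    def build_keyboard(buttons):
        """Arrange labels two per row; with an odd count, the first label gets its own row."""
        rows = []
        start = 0
        if len(buttons) % 2 == 1:
            rows.append([buttons[0]])  # odd count: first button occupies the whole row
            start = 1
        for i in range(start, len(buttons), 2):
            rows.append(buttons[i:i + 2])
        return rows

    print(build_keyboard(["BUTTON1", "BUTTON2", "BUTTON3"]))
    # [['BUTTON1'], ['BUTTON2', 'BUTTON3']]
    ```

    The resulting list of rows maps directly onto the row-of-buttons structure that Telegram keyboards expect.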

    Motivation

    It arose as a response to one of the great problems with image sets: we do not know which image is which, or how to receive feedback from other people about the images so that we can label them better.

    My Own bot

    Hi! I’m Diego, and you can see my own bot CleanDirtyContainer_bot, inspired by Rodrigo’s ML project on the classification of garbage containers in the city of Montevideo: clean-dirty-preprocess-baseline.

    Contributing

    Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

    Next features in order of importance:

    • auto-delete cell images after tagging
    • correct a label after a mistake
    • improve the welcome and help messages
    • implement an internal buffer so that images load faster
    • log in users who tag
    • optional “skip image” button
    • log who tagged each image
    • verify that all buttons are different
    • restrict tagging to a closed group of users
    • admit N labels for the same image, by N different people (that is, instead of a single person saying whether a container is clean or dirty, let N people do it, with N configurable)

    License

    MIT

    Visit original content creator repository
    https://github.com/diesilveira/labeller_img_python_telegram_BOT

  • Online-Shopping-Cart

    Online-Shopping-Cart

    Entities

    • User (userId, name, phoneNum)
    • Buyer (userId)
    • Seller (userId)
    • Bank Card (cardNumber, userId, bank, expiryDate)
    • Credit Card (cardNumber, organization)
    • Debit Card (cardNumber)
    • Store (sid, name, startTime, customerGrade, streetAddr, city, province)
    • Product (pid, sid, name, brand, type, amount, price, colour, customerReview, modelNumber)
    • Order Item (itemid, pid, price, creationTime)
    • Order (orderNumber, creationTime, paymentStatus, totalAmount)
    • Address (addrid, userid, name, city, postalCode, streetAddr, province, contactPhoneNumber)

    Relationships

    • Manage (userid, sid, SetupTime) (userid ref Seller, sid ref Store)
    • Save to Shopping Cart (userid, pid, quantity, addtime) (userid ref Buyer, pid ref Product)
    • Contain (orderNumber, itemid, quantity) (orderNumber ref Order, itemid ref Order Item)
    • Deliver To (addrid, orderNumber, TimeDelivered) (addrid ref Address, orderNumber ref Order)
    • Payment (C.cardNumber, orderNumber, payTime) (C.cardNumber ref Credit Card, orderNumber ref Order)

    Create Database

    Visit original content creator repository
    https://github.com/Sai-Adithya-717/Online-Shopping-Cart

  • ble-sniffer-walkthrough

    BLE Sniffer with Raspberry Pi – Reverse Engineering Walkthrough

    This guide uses a Raspberry Pi CM4 and a Junctek Bluetooth Battery Monitor as reference, but any BLE device can be reverse engineered to some degree. If it sends a steady stream of unencrypted data in a mix of hex codes and decimals, it will be very similar to this guide. Otherwise, further tinkering will be required.

    In order to get battery volts, amps, watts, and charging information off the Junctek Battery Monitor, I had to reverse engineer the data transmitted over Bluetooth, using gatt and the ble_sniffer.py script included in this repo. Once I had a stream of bytes, it had to be parsed, interpreted, and converted into readable information.

    Examining the Bytes and Logging – via Python Script

    1. Run the script and see a list of devices:
    python3 ble_sniffer.py
    
    2. A bunch of devices should start displaying. Look for one with a name like BTGXXX. Copy the MAC address for the next step.
    3. Copy the ini file:
    cp ble_config.ini.dist ble_config.ini
    
    4. Open ble_config.ini in a text editor and set mac_address to the value found in step 2. Leave notify_char_uuid blank for now.
    5. Run the script and connect to the device:
    python3 ble_sniffer.py
    
    6. There should be some results that look like this:

    [38:3b:26:79:df:37] Connected
    [38:3b:26:79:df:37] Resolved services
    [38:3b:26:79:df:37]  Service [0000ffe0-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [0000ffe2-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [0000ffe1-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]  Service [0000fff0-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [0000fff3-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [0000fff2-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [0000fff1-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]  Service [0000180a-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a50-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a29-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a28-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a27-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a26-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a25-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a24-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a23-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]  Service [00001801-0000-1000-8000-00805f9b34fb]
    [38:3b:26:79:df:37]    Characteristic [00002a05-0000-1000-8000-00805f9b34fb]
    
    7. Press ctrl+c to stop once all Service and Characteristic UUIDs have been displayed.
    8. Using trial and error, subscribe to each of the different Characteristic UUIDs and see what info is returned. An Android/iOS app like nRF Connect can be used as well (see appendix below). Generally, the desired information will all be on one characteristic UUID.
    9. Set notify_char_uuid in ble_config.ini to one of the characteristic UUIDs above. Run python3 ble_sniffer.py again to see either:
    characteristic_enable_notifications_failed
    

    or

    characteristic_enable_notifications_succeeded
    

    If it succeeded, there should be a stream of bytes like this:

    Got packet of len: 18 bb275481d5025674d20249d4230337f389ee
    Got packet of len: 18 bb275482d5025675d20251d4230338f300ee
    Got packet of len: 18 bb275483d5025676d20252d4230339f304ee
    Got packet of len: 18 bb275484d5025677d20253d4230340f314ee
    Got packet of len: 18 bb275485d5025678d20254d4230341f318ee
    Got packet of len: 10 bb025679d20256d406ee
    Got packet of len: 11 bb275486d5230342f304ee
    
    10. If bytes start streaming, take notes and write down what is being displayed on the battery monitor’s screen. The device displays this info; write it into a text editor:

    --11:53pm
    12.32v
    0.2a
    0.1a
    0.2a
    38.895Ah
    2.46w
    1.23w
    86% charge
    311h:09m
    
    11. While data is still streaming to the terminal, copy and paste the raw data into a text editor and start searching for values. Since the battery monitor was displaying 12.32v on screen, as shown in the notes in the last step, that data should be somewhere in the byte streams. Search for 1232 (without the decimal). If the voltage changes, search for the new value. Try this with other values like amps, watts, etc. If nothing is found, paste more stream data from the terminal. If still nothing is found, this is probably not the right Characteristic UUID. Press ctrl+c, go back, and set notify_char_uuid to the next Characteristic UUID from the list.
    12. Repeat this process until values from the notes are found within the byte streams. E.g. Got packet of len: 9 bb1232c00246d850ee contains 1232, which was recorded in the notes above, representing 12.32 volts. It also contains 0246, representing 2.46w.
    13. This indicates that the script is listening to the correct Characteristic UUID. Continue recording, changing values on the battery monitor by adding loads, charging, discharging, etc. Record any changes in the notes so that there is a good list of values to search for.
    14. Let it run for a few more minutes, then press ctrl+c. Copy/paste the stream from the terminal to the text file.

    Making Sense Of It All

    1. Here’s a sample set of bytes that were captured:

    Got packet of len: 9 bb1202c08414d867ee
    Got packet of len: 9 bb1645d21913d444ee
    Got packet of len: 9 bb1204c08548d822ee
    Got packet of len: 9 bb0700c18407d842ee
    Got packet of len: 9 bb0710c18541d817ee
    Got packet of len: 12 bb1204c00500c16020d843ee
    Got packet of len: 16 bb1205c00853d6022800d76025d813ee
    Got packet of len: 9 bb1206c08562d850ee
    Got packet of len: 9 bb1873d22187d416ee
    Got packet of len: 9 bb1849d22157d426ee
    Got packet of len: 9 bb1205c06025d851ee
    
    2. While capturing, the battery monitor showed the following:
    • volts: 12.02v, 12.04v, 12.05v, 12.06v
    • amps: 7.00a, 7.10a, 5.00a
    • watts: 84.14w, 85.62w, 60.25w
    • ah remaining: 1.645ah, 1.873ah, 1.849ah
    3. Start searching for values throughout the data. Look at the bytes before and after to see if there are any consistencies. Look at the length of the byte streams, and whether any values appear in every single one. Check if multiple fields are in the same byte stream (i.e. are amps and volts sent at the same time?). What is the byte length of each value, and of the whole stream? Eventually things should start to make sense.

    4. From the info gathered in the previous section, comparing what the battery monitor displays and then searching for those values in the byte stream, some patterns begin to emerge:

    • BB – every stream starts with this byte
    • B1 – always comes after the battery capacity Ah
    • EE – every stream ends with this byte
    • C0 – always comes after voltage
    • C1 – always comes after amps
    • D2 – always comes after amp hours remaining
    • D3 – always comes after the total discharged today
    • D4 – always comes after the total charged today
    • D6 – always comes after time remaining
    • D8 – always comes after watts
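    The constant framing bytes (BB/EE) can also be spotted programmatically. Here is a rough sketch that tallies which two-character pairs appear in every captured packet, using three of the sample packets above:

    ```python
    from collections import Counter

    # three of the raw sample packets captured earlier
    packets = [
        "bb1202c08414d867ee",
        "bb1645d21913d444ee",
        "bb1204c08548d822ee",
    ]

    counts = Counter()
    for p in packets:
        # use a set so a pair repeated inside one packet is only counted once
        pairs = {p[i:i + 2].upper() for i in range(0, len(p), 2)}
        counts.update(pairs)

    # pairs that appear in every packet are likely framing bytes or markers
    always = sorted(pair for pair, n in counts.items() if n == len(packets))
    print(always)  # ['BB', 'EE']
    ```

    With more packets in the list, the hex markers (C0, D2, D8, ...) start rising to the top of the counts as well.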

    After analyzing the data, we can see that the device is converting UART messages to Bluetooth. If we check the manual of the Junctek battery monitor, all of this data appears in the documented messages; the only thing left to do is determine which field corresponds to each value.

    Read all measurement values
    Command: :R50=1,2,1,
    Response: :r50=1,123,1198,1090,7421,2749,437,298,113,0,0,1,69,100,230208,112418,
    Description: 1 represents the communication address; 123 the checksum; 1198 the voltage at 11.98V; 1090 the current at 10.90A; 7421 the remaining battery capacity at 7.421Ah; 2749 the discharge electricity consumption at 2.749kWh; 437 the charging electricity consumption at 0.437kWh; 298 the operational record value; 113 the environmental temperature at 13℃; 0 a function to be determined; 0 the output status as ON (0-ON, 1-OVP, 2-OCP, 3-LVP, 4-NCP, 5-OPP, 6-OTP, 99-OFF); 1 the current direction, currently charging (0-discharge, 1-charging); 69 the remaining time at 69 minutes; 100 the time adjustment (to be determined); 230208 the date, February 8th, 2023; 112418 the time, 11:24:18.

    Read all setting values
    Command: :R51=1,2,1,
    Response: :r51=1,69,2000,1000,2000,3000,20000,120,5,3,200,120,90,101,0,0,1,100,0,10000,1000,20,20,80,0,4321,2,
    Description: 1 represents the communication address; 69 the checksum; 2000 overvoltage protection set to 20.00V; 1000 undervoltage protection set to 10.00V; 2000 over-discharge current protection set to 20.00A; 3000 over-charge current protection set to 30.00A; 20000 over-power protection set to 200.00W; 120 over-temperature protection set to 20℃; 5 protection recovery time set to 5s; 3 protection delay time set to 3s; 200 preset battery capacity set to 20.0Ah; 120 voltage calibration fine-tuning of +20 tuning factors (100 represents tuning factor 0); 90 current calibration fine-tuning of -10 tuning factors (100 represents tuning factor 0); 101 temperature calibration increase of 1℃ (100 represents tuning factor 0); 0 an undefined function; 0 normally open relay type (0-normally open relay, 1-normally closed relay); 1 current multiplier set to 1 (only applicable to the Hall version); 100 time fine-tuning function (undefined); 0 data logging enabled; 10000 full charge voltage set to 100.00V; 1000 low battery voltage set to 10.00V; 20 full charge current value set to 20%; 20 monitoring time set to 2.0min; 80 low temperature protection set to -20℃; 0 current temperature unit in Celsius (1 represents Fahrenheit); 4321 Bluetooth password set to 4321; 2 data logging with a data interval of 3 seconds per record.

    I went through the other hex values from A0 through FF but couldn’t find anything usable. There were a number of incremental values, but I couldn’t figure out whether they were a clock, a counter, or anything of use. In this case, getting volts, amps, watts, and Ah remaining will be good enough. SoC (battery percentage) can be calculated from the Ah remaining as a percentage of the battery’s Ah capacity. I couldn’t find out whether the device transmits charging state (charging vs discharging), and amps always display as a positive number regardless of the direction the current is flowing.

    5. Every device is different, but in the case of the Junctek battery monitor, it returns byte streams of varying lengths (anywhere between 9 and 18 bytes). The values for each parameter are often of varying lengths as well (1 to 3 bytes). This means that the best way to parse the data is to break each byte stream up into segments beginning with BB, then a value of a parameter, then the hex key that represents that parameter. Then another value, then another hex key, and so on until a checksum, then ending with EE.

    Examples:

    bb1202c08414d867ee:

    • C0 – volt – 1202 = 12.02 volts
    • D8 – watts – 8414 = 84.14 watts

    bb1204c00500c16020d843ee:

    • C0 – volt – 1204 = 12.04 volts
    • C1 – amps – 0500 = 05.00 amps
    • D8 – watts – 6020 = 60.20 watts

    bb1873d22187d416ee:

    • D2 – ah remaining – 1873 = 1.873 ah
    • D4 – total charged today

    bb1205c00853d6022800d76025d813ee:

    • C0 – volt – 1205 = 12.05 volts
    • D6 – time remaining – 0853 = 853 min or 14h:13m
    • D7 – ???
    • D8 – watts – 6025 = 60.25 watts

    Parsing and Returning Useful Information

    This can be done a million different ways, but to break up the bytestream into usable information this is what I did:

    params = {
        "voltage": "C0",
        "current": "C1",
        "dir_of_current": "D1",
        "ah_remaining": "D2",
        "discharge": "D3",
        "charge": "D4",
        "mins_remaining": "D6",
        "power": "D8",
        "temp": "D9"
    }
    battery_capacity_ah = 100 # use the B1 data??
    
    params_keys = list(params.keys())
    params_values = list(params.values())
    
    # bs holds one raw packet, e.g. bs = "bb1205c00853d6022800d76025d813ee"
    # split bs into a list of all values and hex keys
    # (upper-cased so the pairs match the marker bytes in params)
    bs_list = [bs[i:i+2].upper() for i in range(0, len(bs), 2)]
    
    # reverse the list so that values come after hex params
    bs_list_rev = list(reversed(bs_list))
    
    values = {}
    charging = False
    # iterate through the list and if a param is found,
    # add it as a key to the dict. The value for that key is a
    # concatenation of all following elements in the list
    # until a non-numeric element appears. This would either
    # be the next param or the beginning hex value.
    for i in range(len(bs_list_rev)-1):
        if bs_list_rev[i] in params_values:
            value_str = ''
            j = i + 1
            while j < len(bs_list_rev) and bs_list_rev[j].isdigit():
                value_str = bs_list_rev[j] + value_str
                j += 1
    
            position = params_values.index(bs_list_rev[i])
    
            key = params_keys[position]
            values[key] = value_str
    
    # now format to the correct decimal place, or perform other formatting
    for key, value in list(values.items()):
        if not value.isdigit():
            del values[key]
            continue  # skip conversion for anything non-numeric
    
        val_int = int(value)
        if key == "voltage":
            values[key] = val_int / 100
        elif key == "current":
            values[key] = val_int / 100
        elif key == "discharge":
            values[key] = val_int / 100000
        elif key == "charge":
            values[key] = val_int / 100000
        elif key == "dir_of_current":
            charging = value == "01"
        elif key == "ah_remaining":
            values[key] = val_int / 1000
        elif key == "mins_remaining":
            values[key] = val_int
        elif key == "power":
            values[key] = val_int / 100
        elif key == "temp":
            values[key] = val_int - 100
    
    # Display current as negative numbers if discharging
    if not charging:
        if "current" in values:
            values["current"] *= -1
        if "power" in values:
            values["power"] *= -1
    
    # Calculate percentage
    if isinstance(battery_capacity_ah, int) and "ah_remaining" in values:
        values["soc"] = values["ah_remaining"] / battery_capacity_ah * 100
    
    # Append max capacity
    values["max_capacity"] = battery_capacity_ah
    
    # Now it should be formatted correctly, in a dictionary
    print(values)
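    As a sanity check, the segment logic above can be condensed into a tiny standalone function and run against one of the sample packets from earlier. This is an illustrative sketch using only a few of the markers, with the decimal scaling omitted:

    ```python
    def parse_packet(bs):
        """Split one packet into two-character pairs and read each value
        backwards from its marker byte."""
        markers = {"C0": "voltage", "C1": "current", "D2": "ah_remaining",
                   "D6": "mins_remaining", "D8": "power"}
        pairs = [bs[i:i + 2].upper() for i in range(0, len(bs), 2)]
        values = {}
        for i, pair in enumerate(pairs):
            if pair in markers:
                digits = ""
                j = i - 1
                # collect the decimal pairs sitting just before the marker
                while j >= 0 and pairs[j].isdigit():
                    digits = pairs[j] + digits
                    j -= 1
                if digits:
                    values[markers[pair]] = int(digits)
        return values

    print(parse_packet("bb1204c00500c16020d843ee"))
    # {'voltage': 1204, 'current': 500, 'power': 6020}
    ```

    The trailing 43 before EE is the checksum; since it sits after the last marker, the backwards scan never picks it up.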

    Logging and Visualization

    For this particular project, I wanted to visualize and log the data using Grafana and Prometheus:
    Check out my solar-bt-battery-monitor project here.

    I also wrote a plugin for Olen’s solar-monitor.

    Appendix A – Examining the Bytes and Logging via nRF Connect

    An alternative to using trial and error to find which Characteristic UUID contains the useful information:

    1. Install nRF Connect on iOS or Android. Open it up and connect to the Junctek device. It should be something like BTG004.
    2. Under the Client tab scroll through the Attribute Table section and hit the down arrow button on everything to start pulling values from the device. Subscribe to values with the down arrow with the line under it. This will start pulling a continuous stream of information.
    3. Open up a blank txt file and take notes of the values that appear on the Junctek device screen. Write down the time along with the volts, amps, Ah, SoC (state of charge), power, and time left:

    --11:53pm
    12.32v
    0.2a
    0.1a
    0.2a
    38.895Ah
    2.46w
    1.23w
    86% charge
    311h:09m
    
    4. Let it run for a minute or two and note any changes. Getting a few different values per parameter would be ideal. Add loads, turn things off, that sort of thing. Note it all.
    5. Go to the Log tab and export the data as text, open it up in a text editor, and start looking at the results. It should look like this:

    Scanner On.
    Device Scanned.
    
    ...
    
    [Callback] peripheral(peripheral, didUpdateValueForCharacteristic: FFE1, error: nil)
    Updated Value of Characteristic FFE1 to 0xBB148815D5115232F369EE.
    "0xBB148815D5115232F369EE" value received.
    [Callback] peripheral(peripheral, didUpdateValueForCharacteristic: FFE1, error: nil)
    Updated Value of Characteristic FFE1 to 0xBB40C10491D809EE.
    "0xBB40C10491D809EE" value received.
    [Callback] peripheral(peripheral, didUpdateValueForCharacteristic: FFE1, error: nil)
    Updated Value of Characteristic FFE1 to 0xBB148816D5115233F371EE.
    "0xBB148816D5115233F371EE" value received.
    [Callback] peripheral(peripheral, didUpdateValueForCharacteristic: FFE1, error: nil)
    Updated Value of Characteristic FFE1 to 0xBB148817D5115234F373EE.
    "0xBB148817D5115234F373EE" value received.
    
    6. Look through the notes and start searching for values. E.g. for an Ah reading of 38.895Ah, search for 38895:

    [Callback] peripheral(peripheral, didUpdateValueForCharacteristic: FFE1, error: nil)
    Updated Value of Characteristic FFE1 to 0xBB148623D5038895D2114920F352EE.
    
    7. Add that to the notes:

    ...
    38.895Ah - Updated Value of Characteristic FFE1 to 0xBB148623D5038895D2114920F352EE
    ...
    
    8. Repeat this process until all values in the notes have an associated Characteristic update. To find ‘time left’, convert hours to minutes.
    9. Start looking at the different characteristics for a field, e.g. volts or Ah. Look for similarities in the bytes.
    10. Make a note of the Updated Value of Characteristic xxxx to 0xBBxxxxxxx block. In this case it always updates FFE1. This will be used in the next section.
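    For example, the ‘311h:09m’ time-left reading from the notes converts as:

    ```python
    # convert an hours:minutes display value into total minutes
    hours, minutes = 311, 9
    total_minutes = hours * 60 + minutes
    print(total_minutes)  # 18669
    ```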

    Credits

    Uses gatt-python

    Shout-outs to Olen’s solar-monitor project for some of the code in ble_sniffer.py that helps with discovery and reconnecting devices.

    Visit original content creator repository
    https://github.com/chriskomus/ble-sniffer-walkthrough

  • shuriken

    Shuriken Logo JavaScript Style Guide

    Shuriken

    A bot that gives out ninja stars! (aka shuriken)

    Based on botkit-starter-slack. The docs should contain helpful tips.

    Set up your Slack Application

    Once you have setup your Botkit development environment, the next thing you will want to do is set up a new Slack application via the Slack developer portal. This is a multi-step process, but only takes a few minutes.

    Add a .env file (literally called .env) and add the following:

    # Environment Config
    
    clientId=
    clientSecret=
    PORT=   # defaults to 3000
    DEBUG=* # if you want to see all debug logs, remove if otherwise
    
    # note: .env is a shell file so there can’t be spaces around =
    

    Update the .env file with your newly acquired tokens.

    Launch your bot application:

    node .

    Now, visit your new bot’s login page: http://localhost:3000/login

    While developing, you can expose the port via ngrok. If you want to see debug logs, run this bot as follows: DEBUG=* node .

    Deploying to cloud host

    If using the default filesystem storage, make sure your host allows filesystem modifications.

    Customize Storage

    By default, this bot uses a simple file-system based storage mechanism to record information about the teams and users that interact with the bot. While this is fine for development, or use by a single team, most developers will want to customize the code to use a real database system.

    There are Botkit plugins for all the major database systems which can be enabled with just a few lines of code.

    We have enabled our Mongo middleware for starters in this project. To use your own Mongo database, just fill out MONGO_URI in your .env file with the appropriate information. For tips on reading and writing to storage, check out these medium posts

    Visit original content creator repository https://github.com/ahmed-musallam/shuriken
  • sign-segmentation

    Temporal segmentation of sign language videos

    This repository provides code for the following two papers:

    [Project page]

    demo

    Contents

    Setup

    # Clone this repository
    git clone git@github.com:RenzKa/sign-segmentation.git
    cd sign-segmentation/
    # Create signseg_env environment
    conda env create -f environment.yml
    conda activate signseg_env

    Data and models

    You can download our pretrained models (models.zip [302MB]) and data (data.zip [5.5GB]) used in the experiments here or by executing download/download_*.sh. The unzipped data/ and models/ folders should be located in the root directory of the repository (for the demo, downloading the models folder is sufficient).

    Data:

    Please cite the original datasets when using the data: BSL Corpus | Phoenix14. We provide the pre-extracted features and metadata. See here for a detailed description of the data files.

    • Features: data/features/*/*/features.mat
    • Metadata: data/info/*/info.pkl

    Models:

    • I3D weights, trained for sign classification: models/i3d/*.pth.tar
    • MS-TCN weights for the demo (see tables below for links to the other models): models/ms-tcn/*.model

    The folder structure should be as below:

    sign-segmentation/models/
      i3d/
        i3d_kinetics_bsl1k_bslcp.pth.tar
        i3d_kinetics_bslcp.pth.tar
        i3d_kinetics_phoenix_1297.pth.tar
      ms-tcn/
        mstcn_bslcp_i3d_bslcp.model
    

    Demo

    The demo folder contains a sample script to estimate the segments of a given sign language video. It is also possible to use pre-extracted I3D features as a starting point, and only apply the MS-TCN model. --generate_vtt generates a .vtt file which can be used with our modified version of VIA annotation tool:

    usage: demo.py [-h] [--starting_point {video,feature}]
                   [--i3d_checkpoint_path I3D_CHECKPOINT_PATH]
                   [--mstcn_checkpoint_path MSTCN_CHECKPOINT_PATH]
                   [--video_path VIDEO_PATH] [--feature_path FEATURE_PATH]
                   [--save_path SAVE_PATH] [--num_in_frames NUM_IN_FRAMES]
                   [--stride STRIDE] [--batch_size BATCH_SIZE] [--fps FPS]
                   [--num_classes NUM_CLASSES] [--slowdown_factor SLOWDOWN_FACTOR]
                   [--save_features] [--save_segments] [--viz] [--generate_vtt]
    

    Example usage:

    # Print arguments
    python demo/demo.py -h
    # Save features and predictions and create visualization of results in full speed
    python demo/demo.py --video_path demo/sample_data/demo_video.mp4 --slowdown_factor 1 --save_features --save_segments --viz
    # Save only predictions and create visualization of results slowed down by factor 6
    python demo/demo.py --video_path demo/sample_data/demo_video.mp4 --slowdown_factor 6 --save_segments --viz
    # Create visualization of results slowed down by factor 6 and .vtt file for VIA tool
    python demo/demo.py --video_path demo/sample_data/demo_video.mp4 --slowdown_factor 6 --viz --generate_vtt

    The demo will:

    1. use the models/i3d/i3d_kinetics_bslcp.pth.tar pretrained I3D model to extract features,
    2. use the models/ms-tcn/mstcn_bslcp_i3d_bslcp.model pretrained MS-TCN model to predict the segments out of the features,
    3. save results (depending on which flags are used).

    Training

    Train ICASSP

    Run the corresponding run-file (*.sh) to train the MS-TCN with pre-extracted features on BSL Corpus. During training, a .log file for TensorBoard is generated. In addition, the metrics are saved in train_progress.txt.

    • Influence of I3D training (fully-supervised segmentation results on BSL Corpus)

      ID | Model | mF1B | mF1S | Links (for seed=0)
      1 | BSL Corpus | 68.68±0.6 | 47.71±0.8 | run, args, I3D model, MS-TCN model, logs
      2 | BSL1K -> BSL Corpus | 66.17±0.5 | 44.44±1.0 | run, args, I3D model, MS-TCN model, logs

    • Fully-supervised segmentation results on PHOENIX14

      ID | I3D training data | MS-TCN training data | mF1B | mF1S | Links (for seed=0)
      3 | BSL Corpus | PHOENIX14 | 65.06±0.5 | 44.42±2.0 | run, args, I3D model, MS-TCN model, logs
      4 | PHOENIX14 | PHOENIX14 | 71.50±0.2 | 52.78±1.6 | run, args, I3D model, MS-TCN model, logs

    Train CVPRW

    Requirement: pre-extracted pseudo-labels, changepoints, or CMPL labels:

    1. Save the pre-trained model in models/ms-tcn/*.model
    2. a) Extract pseudo-labels before extracting CMPL labels: Extract only PL | Extract CMPL | Extract PL and CMPL
       b) Extract changepoints separately for training: Extract CP -> specify the correct model path
    • Pseudo-labelling techniques on PHOENIX14

    | ID | Method | Adaptation protocol | mF1B | mF1S | Links (for seed=0) |
    |----|--------|---------------------|------|------|--------------------|
    | 5 | Pseudo-labels | inductive | 47.94±1.0 | 32.45±0.3 | run, args, I3D model, MS-TCN model, logs |
    | 6 | Changepoints | inductive | 48.51±0.4 | 34.45±1.4 | run, args, I3D model, MS-TCN model, logs |
    | 7 | CMPL | inductive | 53.57±0.7 | 33.82±0.0 | run, args, I3D model, MS-TCN model, logs |
    | 8 | Pseudo-labels | transductive | 47.62±0.4 | 32.11±0.9 | run, args, I3D model, MS-TCN model, logs |
    | 9 | Changepoints | transductive | 48.29±0.1 | 35.31±1.4 | run, args, I3D model, MS-TCN model, logs |
    | 10 | CMPL | transductive | 53.53±0.1 | 32.93±0.9 | run, args, I3D model, MS-TCN model, logs |

    Citation

    If you use this code and data, please cite the following:

    @inproceedings{Renz2021signsegmentation_a,
        author       = "Katrin Renz and Nicolaj C. Stache and Samuel Albanie and G{\"u}l Varol",
        title        = "Sign Language Segmentation with Temporal Convolutional Networks",
        booktitle    = "ICASSP",
        year         = "2021",
    }
    
    @inproceedings{Renz2021signsegmentation_b,
        author       = "Katrin Renz and Nicolaj C. Stache and Neil Fox and G{\"u}l Varol and Samuel Albanie",
        title        = "Sign Segmentation with Changepoint-Modulated Pseudo-Labelling",
        booktitle    = "CVPRW",
        year         = "2021",
    }
    

    License

    The license in this repository only covers the code. For data.zip and models.zip, we refer to the terms and conditions of the original datasets.

    Acknowledgements

    The code builds on the github.com/yabufarha/ms-tcn repository. The demo reuses parts of github.com/gulvarol/bsl1k. We would like to thank C. Camgoz for the help with the BSLCORPUS data preparation.

    Visit original content creator repository https://github.com/RenzKa/sign-segmentation
  • backup-gpg-keys

    Export GPG Keys

    📔 How to Use It?

    The script exports GPG keys, making it quicker and more convenient to migrate them to another machine or system. I made it for personal use, but you are welcome to use it as well.
    In this video, you'll find a step-by-step demonstration of how to use it. Although the usage is largely self-explanatory, it helps to see it demonstrated beforehand.

    Demo: https://youtu.be/rX2tKTZVeZA?si=0qQCc3VDuzirndFb

    ❓ What is GPG?

    GPG, or GNU Privacy Guard, is a free and open-source software tool that provides encryption and digital signature functionality based on the OpenPGP (Pretty Good Privacy) standard. Its primary applications include:

    • Encryption: GPG allows users to encrypt data, such as emails, files, and text messages, to ensure that only intended recipients with the appropriate decryption key can access the information. This is particularly crucial for protecting sensitive or confidential data from unauthorized access.
    • Digital Signatures: GPG enables users to create digital signatures for data, verifying the authenticity and integrity of the information. Digital signatures help ensure that data has not been tampered with during transmission and that it originates from the expected sender.
    • Key Management: GPG provides tools for generating, managing, and distributing cryptographic keys used for encryption and digital signatures. This includes creating key pairs, importing/exporting keys, and revoking compromised keys.
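
    The key-management workflow this script automates (exporting the public key, the secret key, and the owner-trust database) maps onto a few standard gpg invocations. As a rough sketch, here is Python code that only assembles those commands; the key ID and output directory are hypothetical placeholders, and this is an illustration of the general export workflow, not this script's exact logic:

    ```python
    # Sketch: assemble the gpg commands a backup script would run.
    # "ABCD1234" and the output directory are hypothetical placeholders.

    def build_export_commands(key_id: str, outdir: str) -> list[list[str]]:
        """Return the gpg invocations needed to back up one key pair."""
        return [
            # Public key, ASCII-armored so it survives copy/paste.
            ["gpg", "--armor", "--export", "--output", f"{outdir}/public.asc", key_id],
            # Secret key (gpg prompts for the passphrase when actually run).
            ["gpg", "--armor", "--export-secret-keys", "--output", f"{outdir}/private.asc", key_id],
            # Owner-trust database, needed to restore trust levels on the new machine.
            ["gpg", "--export-ownertrust"],
        ]

    for cmd in build_export_commands("ABCD1234", "/tmp/gpg-backup"):
        print(" ".join(cmd))
    ```

    On the target machine, the corresponding imports are `gpg --import` for the two key files and `gpg --import-ownertrust` for the trust database.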

    GPG is used in Git and GitHub primarily for ensuring the authenticity and integrity of commits and tags. Here’s why it’s used in these contexts:

    • Commit Signing: Developers can sign their Git commits using GPG keys. This allows others to verify that the commits were made by the stated author and have not been altered since being signed. Commit signing is essential for maintaining the trustworthiness of version control histories, especially in collaborative software development environments.
    • Tag Signing: Similarly, GPG can be used to sign Git tags. Tags are commonly used to mark specific points in a repository’s history, such as release versions. By signing tags with GPG keys, developers can ensure the authenticity and integrity of these important milestones.
    • Code Integrity: Integrating GPG with Git and GitHub enhances code integrity by enabling developers to cryptographically sign their contributions. This helps prevent unauthorized changes and malicious code injection, and ensures that contributions are traceable to legitimate sources.

    In summary, GPG is used in Git and GitHub to provide cryptographic security measures such as commit and tag signing, which help maintain the authenticity, integrity, and trustworthiness of version-controlled code repositories.

    Visit original content creator repository
    https://github.com/zEuS0390/backup-gpg-keys

  • SVN-Hook-Tools

    SVN Hook Tools

    Presentation

    The SVN Hook Tools is a rule engine designed to quickly and easily add tasks to Subversion repository hooks.
    Each hook is associated with a rule-set, where each rule contains a condition and a list of actions to perform. For example, you may block a commit without a comment, allow log message editing by certain users only, or even execute programs or Web requests after creating a tag.

    Installation

    Prerequisites

    The SVN Hook Tools requires a Java 1.7 JVM.

    Deployment

    To deploy the tool, copy svn-hook-tools.jar and its config directory to the same location, for example the hooks directory of the repository.

    Call the tool

    To call the tool, edit the repository hook shell scripts for which you want to create tasks and add the following call to the SVN Hook Tools. For Linux:

    java -jar svn-hook-tools.jar $(basename $0) $@

    or for Windows:

    java -jar svn-hook-tools.jar %~n0 %*

    The arguments needed to call the tool are the hook name and the hook script arguments.

    Setup

    Logging setup

    The SVN Hook Tools uses a java.util.logging logger. You may configure logging through the configuration file in config/logging.properties. The log is especially useful during rule definition for tracking down configuration errors. By default, the tool writes a human-readable svn-hook-tools.log file in the user home directory.

    Condition and action binding

    The tool comes with built-in conditions and actions. Nevertheless, you may also develop and add your own condition or action and use it through dynamic class loading. The configuration file responsible for loading and binding classes can be found in config/bindings.properties.

    Built-in conditions

    The tool comes with some built-in conditions. Here is a summary table of conditions.

    | Name | Description | Parameters |
    |------|-------------|------------|
    | all | An operator condition, valid if all nested conditions are valid. | |
    | any | An operator condition, valid if at least one of the nested conditions is valid. | |
    | author | A condition that tests the Subversion user name. | name: mandatory, the name to check; nameComparison: IS by default, the name comparison method. |
    | emptyCommitLog | A condition valid if the commit log message is empty. | |
    | minLengthCommitLog | A condition valid if the commit log message length is greater than the requested length. | length: mandatory, the minimum length of the commit log message. |
    | patternCommitLog | A condition valid if the commit log message matches the given pattern. | pattern: mandatory, the pattern of the commit log message. |
    | resource | A condition valid if any resource operation satisfies all related resource filters. | See the resource filters description below. |
    | not | An operator condition, valid if the nested condition is not valid. | |

    The resource condition must include resource filters to be validated. Here is a summary table of resource filters.

    | Name | Description | Parameters |
    |------|-------------|------------|
    | FileExtension | A resource filter based on the file extension. | fileExtension: mandatory, the file extension to check; fileExtensionComparison: IS by default, the file extension comparison method. |
    | FileName | A resource filter based on the file name. | fileName: mandatory, the file name to check. |
    | FileLocation | A resource filter based on the resource location. | type: no check by default, the location type to check (ROOT_LOCATION, TRUNK_LOCATION, BRANCHES_LOCATION, TAGS_LOCATION, A_BRANCH_LOCATION, A_TAG_LOCATION, IN_TRUNK_LOCATION, IN_A_BRANCH_LOCATION, IN_A_TAG_LOCATION, IN_ROOT_LOCATION); projectName: no check by default, the project name to check; projectNameComparison: IS by default, the project name comparison method; branchName: no check by default, the branch name to check; branchNameComparison: IS by default, the branch name comparison method; tagName: no check by default, the tag name to check; tagNameComparison: IS by default, the tag name comparison method; path: no check by default, the path to check; pathComparison: CONTAINS by default, the path comparison method. |
    | Operation | A resource filter based on the operation done on the resource. | operation: mandatory, the operation to check (ADDED, COPIED, DELETED, UPDATED, PROPERTY_CHANGED, LOCK, UNLOCK). |
    | PropertyChange | A resource filter based on a resource property change. | name: mandatory, the name of the property to check; oldValue: no check by default, the old value of the property to check; oldValueComparison: IS by default, the old value comparison method; newValue: no check by default, the new value of the property to check; newValueComparison: IS by default, the new value comparison method. |
    | FileType | A resource filter based on the resource type. | type: mandatory, the type of the resource to check (DIRECTORY or FILE). |

    Built-in actions

    The tool comes with some built-in actions. Here is a summary table of actions.

    | Name | Description | Parameters |
    |------|-------------|------------|
    | error | An action that raises an error to the Subversion client. | code: mandatory, the error code to return; message: the error message to send. |
    | exec | An action that executes a program. | command: mandatory, the program command to execute; parameters: empty by default, the program parameters to pass; waitFor: FALSE by default, whether the engine should wait for the termination of the executed program. |
    | log | An action that writes a log entry. | message: mandatory, the log message; level: INFO by default, the log level (FINEST, FINER, FINE, CONFIG, INFO, WARNING or SEVERE). |
    | request | An action that makes an HTTP request. | url: mandatory, the URL to request; type: GET by default, the request type (POST or GET); headers: the request headers; data: the request data to send. |

    Rules declaration

    A rule-set for a hook is described in an XML file. The file name must be the same as the hook script, suffixed with -rules.xml, and located in the config directory, config/pre-commit-rules.xml for example.
    The root node should be a rule-set node containing one rule node for each task you want to perform. A rule node must have a name attribute to describe it.

    <rule-set>
    	<rule name="Empty commit log">
    	</rule>
    	<rule name="Block in trunk modification">
    	</rule>
    </rule-set>

    The rule node may contain a condition node. If the condition node is missing, the rule actions will always be triggered with the hook. The condition node must have a type attribute defining the kind of condition. All available condition types are defined in the config/bindings.properties configuration file.

    <rule-set>
    	<rule name="Empty commit log">
    		<condition type="emptyCommitLog" />
    	</rule>
    	<rule name="Block in trunk modification">
    		<condition type="resource">
    			<filter type="location">
    				<parameter name="type">IN_TRUNK_LOCATION</parameter>
    			</filter>
    		</condition>
    	</rule>
    </rule-set>

    If you want to combine more than one condition, you may nest condition nodes inside an operator-typed condition (all, any, not). For example, to block trunk modification for all users except admin, you would use the following nesting.

    <rule-set>
    	<rule name="Empty commit log">
    		<condition type="emptyCommitLog" />
    	</rule>
    	<rule name="Block non admin in trunk modification">
    		<condition type="all">
    			<condition type="not">
    				<condition type="author">
    					<parameter name="name" value="admin" />
    				</condition>
    			</condition>
    			<condition type="resource">
    				<filter type="location">
    					<parameter name="type">IN_TRUNK_LOCATION</parameter>
    				</filter>
    			</condition>
    		</condition>
    	</rule>
    </rule-set>

    After defining your condition, add an action node for each task you want to perform when the condition is met. The action node must have a type attribute. All available action types are defined in the config/bindings.properties configuration file.

    <rule-set>
    	<rule name="Empty commit log">
    		<condition type="emptyCommitLog" />
    		<action type="error">
    			<parameter name="code">-10</parameter>
    			<parameter name="message">The commit message could not be empty.</parameter>
    		</action>
    	</rule>
    	<rule name="Block in trunk modification">
    		<condition type="resource">
    			<filter type="location">
    				<parameter name="type">IN_TRUNK_LOCATION</parameter>
    			</filter>
    		</condition>
    		<action type="error">
    			<parameter name="code">-11</parameter>
    			<parameter name="message">Forbidden to touch trunk.</parameter>
    		</action>
    	</rule>
    </rule-set>

    Doing so, you will define your entire hook rule-set. Each rule is evaluated independently, in sequential order.
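
    The evaluation model is simple enough to sketch outside Java: parse the rule-set, check each rule's condition, and run its actions when the condition holds. The following Python sketch is not the tool's implementation, only an illustration of that flow for the emptyCommitLog rule shown above (error messages are collected in a list instead of being returned to a Subversion client):

    ```python
    import xml.etree.ElementTree as ET

    # Same rule as in the examples above: reject commits with an empty log.
    RULESET = """
    <rule-set>
        <rule name="Empty commit log">
            <condition type="emptyCommitLog" />
            <action type="error">
                <parameter name="code">-10</parameter>
                <parameter name="message">The commit message could not be empty.</parameter>
            </action>
        </rule>
    </rule-set>
    """

    def evaluate(ruleset_xml: str, commit_log: str) -> list[str]:
        """Return the error messages of every rule whose condition holds."""
        errors = []
        for rule in ET.fromstring(ruleset_xml).findall("rule"):
            condition = rule.find("condition")
            # A missing condition means the rule's actions always fire.
            holds = condition is None or (
                condition.get("type") == "emptyCommitLog" and not commit_log.strip()
            )
            if holds:
                for action in rule.findall("action"):
                    for param in action.findall("parameter"):
                        if param.get("name") == "message":
                            errors.append(param.text)
        return errors

    print(evaluate(RULESET, ""))         # empty log -> error reported
    print(evaluate(RULESET, "fix bug"))  # non-empty log -> no errors
    ```

    The real engine resolves condition and action types through config/bindings.properties instead of hard-coding them, which is what makes it extensible.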

    Extending the tool

    The SVN Hook Tools may be extended by developing conditions and actions. Create a Java project with svn-hook-tools.jar as a dependency and extend the following classes:

    | Class | Purpose |
    |-------|---------|
    | action.AbstractAction | Create a new action. |
    | condition.AbstractCondition | Create a new condition. |
    | condition.operator.AbstractGroupCondition | Create a new operator condition. |
    | condition.resource.filter.AbstractResourceFilter | Create a new resource filter (for the resource condition). |

    Each child of those classes may have parameters. Declare them as public members annotated with @ConfigurationParameter so they are dynamically loaded and set during rule-set parsing. The class field name must match the name attribute value of the related parameter node. Once the child classes are done, export your project as a jar and add it to the classpath when you call the tool. To be able to use your new conditions and actions, you must declare their types in config/bindings.properties (a mapping between type names and fully qualified class names).

    Visit original content creator repository
    https://github.com/PerfectSlayer/SVN-Hook-Tools

  • solana-signature-verification

    Solana Program: Unlocking a Vault Based on SOL Price and Signature Verification

    This Solana escrow program allows users to withdraw funds only when the SOL price reaches a certain target and after verifying their Ed25519 signature.

    Ed25519 Signature Verification

    In Solana, programs cannot directly call the Ed25519 program using a CPI (Cross-Program Invocation) because signature verification is computationally expensive. Instead, the Ed25519 signature verification program exists as a precompiled instruction outside the Solana Virtual Machine (SVM).
    We perform verification by passing two instructions: first the Ed25519 program instruction, and second our custom logic instruction (the latter must receive the instructions sysvar account to read the current transaction state).

    The sysvar instructions account provides access to all instructions within the same transaction.
    This allows our program to fetch and verify the arguments passed to the Ed25519 program, ensuring they were correctly signed before unlocking funds.
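
    Concretely, the Ed25519 program's instruction data is a small header (a one-byte signature count plus one byte of padding) followed by one Ed25519SignatureOffsets record per signature: seven little-endian u16 fields pointing at the signature, public key, and message. As an illustration of what the on-chain check has to parse, here is a Python sketch that decodes that layout; the field order follows the Ed25519 program's instruction format, while the byte values below are made up:

    ```python
    import struct

    # Ed25519 instruction data: u8 num_signatures, u8 padding, then per
    # signature a 14-byte record of seven little-endian u16 fields.
    OFFSETS_FMT = "<7H"
    FIELDS = (
        "signature_offset",
        "signature_instruction_index",
        "public_key_offset",
        "public_key_instruction_index",
        "message_data_offset",
        "message_data_size",
        "message_instruction_index",
    )

    def parse_ed25519_ix_data(data: bytes) -> list[dict]:
        """Decode the offsets table that the escrow program validates."""
        num_signatures = data[0]  # data[1] is padding
        records = []
        for i in range(num_signatures):
            start = 2 + i * struct.calcsize(OFFSETS_FMT)
            values = struct.unpack_from(OFFSETS_FMT, data, start)
            records.append(dict(zip(FIELDS, values)))
        return records

    # Build a fake instruction with one signature entry (made-up offsets).
    header = bytes([1, 0])
    offsets = struct.pack(OFFSETS_FMT, 16, 0, 80, 0, 112, 32, 0)
    record = parse_ed25519_ix_data(header + offsets)[0]
    print(record["public_key_offset"], record["message_data_size"])
    ```

    The escrow program performs the same decoding on-chain and then checks that each offset and instruction index points back at the expected data before unlocking funds.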

    Vault Unlock Conditions

    The SOL price must meet or exceed the target threshold, and the Ed25519 signature must be successfully verified.

    Vault Architecture

    flowchart TD
        %% Style Definitions - GitHub Monochrome Colors
        classDef darkBackground fill:#24292e,stroke:#1b1f23,stroke-width:2,color:#ffffff,font-size:24px
        classDef boxStyle fill:#2f363d,stroke:#1b1f23,stroke-width:2,color:#ffffff,font-size:22px
        classDef subBoxStyle fill:#444d56,stroke:#1b1f23,stroke-width:2,color:#ffffff,font-size:20px
        classDef lighterBoxStyle fill:#586069,stroke:#1b1f23,stroke-width:2,color:#ffffff,font-size:20px
    
        %% User Entry Points
        subgraph UserActions["User Actions"]
            direction TB
            class UserActions darkBackground
            Deposit["Deposit SOL + Ed25519 Sig"]
            Withdraw["Withdraw Request + Ed25519 Sig + Feed ID"]
        end
    
        %% Program Logic
        subgraph ProgramFlow["Escrow Program"]
            class ProgramFlow boxStyle
            
            %% Signature Verification
            subgraph SigVerification["Ed25519 Signature Verification"]
                class SigVerification subBoxStyle
                GetPrevIx["Get Previous Instruction"]
                VerifyProgram["Verify Ed25519 Program ID"]
                
                subgraph OffsetValidation["Signature Offset Validation"]
                    class OffsetValidation lighterBoxStyle
                    ValidatePK["Validate Public Key Offset"]
                    ValidateSig["Validate Signature Offset"]
                    ValidateMsg["Validate Message Data"]
                    VerifyIndices["Verify Instruction Indices Match"]
                end
            end
    
            %% Main Operations
            subgraph Operations["Program Operations"]
                class Operations subBoxStyle
                
                subgraph DepositFlow["Deposit Handler"]
                    class DepositFlow lighterBoxStyle
                    UpdateState["Update Escrow State:
                    - Set unlock_price
                    - Set escrow_amount"]
                    TransferToEscrow["Transfer SOL to Escrow Account"]
                end
    
                subgraph WithdrawFlow["Withdraw Handler"]
                    class WithdrawFlow lighterBoxStyle
                    GetPrice["Get Price from Pyth"]
                    PriceCheck["Check if price > unlock_price"]
                    TransferToUser["Transfer SOL to User"]
                end
            end
        end
    
        %% Flow Connections
        Deposit --> GetPrevIx
        Withdraw --> GetPrevIx
        GetPrevIx --> VerifyProgram
        VerifyProgram --> OffsetValidation
        ValidatePK & ValidateSig & ValidateMsg --> VerifyIndices
        
        VerifyIndices -->|"Signature Valid"| Operations
        VerifyIndices -->|"Invalid"| Error["Return Signature Error"]
        
        Operations --> DepositFlow
        Operations --> WithdrawFlow
        
        GetPrice --> PriceCheck
        PriceCheck -->|"Price > Unlock Price"| TransferToUser
        PriceCheck -->|"Price <= Unlock Price"| WithdrawError["Return Invalid Withdrawal Error"]
    
        %% Apply Styles
        class Deposit,Withdraw boxStyle
        class GetPrevIx,VerifyProgram,Error,WithdrawError subBoxStyle
        class UpdateState,TransferToEscrow,GetPrice,PriceCheck,TransferToUser lighterBoxStyle
    





    Important Links

    Making client requests from Pyth: https://github.com/pyth-network/pyth-crosschain/tree/main/target_chains/solana/sdk/js/pyth_solana_receiver
    Use rpc-websockets 7.11.0: https://stackoverflow.com/questions/78566652/solana-web3-js-cannot-find-module-rpc-websockets-dist-lib-client

    Visit original content creator repository
    https://github.com/mubarizkyc/solana-signature-verification

  • ComboPicker

    ComboPicker

    ComboPicker is a SwiftUI view that allows users to input a value by selecting from a predefined set or by typing a custom one.

    ComboPicker

    Installation

    ComboPicker is available through Swift Package Manager.

    .package(url: "https://github.com/MrAsterisco/ComboPicker", from: "<see GitHub releases>")

    Latest Release

    To find out the latest version, look at the Releases tab of this repository.

    Usage

    ComboPicker can display any type that conforms to the ComboPickerModel protocol. The following example shows a model that wraps a Int:

    public struct ExampleModel: ComboPickerModel {
      public static func ==(lhs: ExampleModel, rhs: ExampleModel) -> Bool {
        lhs.value == rhs.value
      }
    
      public let id = UUID()
      public let value: Int
      
      // Default initializer.
      public init(value: Int) {
        self.value = value
      }
      
      // Initializer to convert user input into a value.
      public init?(customValue: String) {
        guard let intValue = NumberFormatter().number(from: customValue)?.intValue else { return nil }
        self.init(value: intValue)
      }
      
      // Convert the value to prefill the manual input field.
      public var valueForManualInput: String? {
        NumberFormatter().string(from: .init(value: value))
      }
    }

    You also have to provide an implementation of ValueFormatterType, so that the ComboPicker knows how to represent values in the Pickers. The following example illustrates a simple formatter for the model implemented above:

    final class ExampleModelFormatter: ValueFormatterType {
      func string(from value: ExampleModel) -> String {
        "# \(NumberFormatter().string(from: .init(value: value.value)) ?? "")"
      }
    }

    Once you have a collection of models and the formatter implementation, building a ComboPicker is easy:

    @State private var content: [ExampleModel]
    @State private var selection: ExampleModel
    
    ComboPicker(
      title: "Pick a number",
      manualTitle: "Custom...",
      valueFormatter: ExampleModelFormatter(),
      content: $content,
      value: $selection
    )

    Platform Behaviors

    ComboPicker adapts to the platform to provide an easy and accessible experience regardless of the device.

    iOS & iPadOS

    On iOS and iPadOS, the ComboPicker shows a one-line UIPickerView that the user can scroll. If the user taps on it, a text field for manual input appears.

    ComboPicker

    If necessary, you can customize the keyboard type for the manual input field:

    .keyboardType(.numberPad)

    Note: because of limitations of the SwiftUI Picker regarding the gestures handling, as well as the ability of showing and using multiple wheel pickers in the same screen, ComboPicker is currently relying on a UIViewRepresentable implementation of a UIPickerView. You can read more about the current limitations here.

    watchOS

    On watchOS, the ComboPicker shows a normal Picker that the user can scroll using their fingers or the Digital Crown. If the user taps on it, a text field for manual input appears.

    ComboPicker

    There is no support for specifying the keyboard type, at the moment, as Apple doesn’t provide a way to do so on watchOS.

    macOS

    On macOS, the ComboPicker becomes an NSComboBox. Users will be able to select options or type custom ones directly into the component.

    See the Apple docs for further information on how combo boxes work.

    tvOS

    On tvOS, the ComboPicker shows a Picker followed by a TextField. The user can move on the picker or scroll down to the text field and input a custom value.

    ComboPicker

    If necessary, you can customize the keyboard type for the manual input field:

    .keyboardType(.numberPad)

    Compatibility

    ComboPicker requires iOS 15.0 or later, macOS 12.0 or later, watchOS 8.0 or later and tvOS 15.0 or later.

    Contributions

    All contributions to expand the library are welcome. Fork the repo, make the changes you want, and open a Pull Request.

    If you make changes to the codebase, I am not enforcing a coding style, but I may ask you to make changes based on how the rest of the library is made.

    Status

    This library is under active development. Even if most of the APIs are pretty straightforward, they may change in the future; but you don’t have to worry about that, because releases will follow Semantic Versioning 2.0.0.

    License

    ComboPicker is distributed under the MIT license. See LICENSE for details.

    Visit original content creator repository https://github.com/MrAsterisco/ComboPicker