Blog

  • Sommatore_IEEE754

    Final Project (Prova Finale) for the Reti Logiche course – 2023

    Grade: 28 / 30

    Specification

    The goal of the project is the implementation of a floating-point adder compliant with the IEEE 754 standard, including the special values NaN and infinity, that correctly computes the sum and difference of normalized numbers only. The adder must be implemented as a pipelined architecture, with the stages chosen to be as balanced as possible.
    Once the component has been implemented, a test bench must be written to simulate it and verify its correct operation in the various cases.

    Implementation

    The device uses a three-stage pipeline whose stage delays were estimated at 25.4 ns, 27.0 ns and 25.3 ns respectively, which is why a 30 ns clock period was chosen (a simplified diagram of the pipeline stages is available in the repository).
    The first stage takes the top-level inputs and produces the aligned (denormalized) numbers X and Y: DNORMX always corresponds to the smaller number and DNORMY to the larger one. If the smaller number is negative, its one's complement is produced (1 will be added to it in the next pipeline stage to obtain the two's complement), while the larger number is forwarded, in two's complement where needed. The top-level inputs are also analyzed by the CLU module to produce the one-bit signals PINF, NINF and NAN, which indicate whether the output of the operation is a special value. Note that these signals are one-hot: the result cannot be two different special values at the same time, and they can be 1 only if at least one of the top-level inputs is a special value. The exponent of the larger number is also forwarded to the next stage.
    The second stage adds the mantissas of the denormalized numbers and produces a 24-bit signal representing the denormalized mantissa (including the 1 that is omitted in the normalized representation) in unsigned notation. It then forwards to the next stage the one-bit signal SIGN, which is 1 if the result of the operation is negative and 0 otherwise (no fixed value of SIGN is defined when the result is zero), and the one-bit signal INCR, which indicates whether the exponent of the result must be incremented because of an overflow in the addition. Unused signals are forwarded to the next stage.
    The third stage first normalizes the result from the previous stage by shifting the mantissa M right or left according to the output of a priority encoder and adjusting the EXP signal by the number of bit positions the mantissa is shifted. Once a normalized number is obtained, the PINF, NINF and NAN signals are checked to decide whether the result must be replaced because of a special operand. The SIGN signal is copied into the most significant bit of the result to indicate the sign.
    Keep in mind that inputs and outputs are sampled every 30 ns, so the correct output is only available after four clock cycles.
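    To make the three stages concrete, here is a purely illustrative Python sketch of the same align / add / normalize algorithm on single-precision numbers; it is not the VHDL implementation, it ignores the pipeline registers and the special values handled by the CLU, and it truncates instead of rounding.

    import struct

    def fields(f):
        # Unpack a float into IEEE 754 single-precision sign, exponent, mantissa.
        bits = struct.unpack('>I', struct.pack('>f', f))[0]
        return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

    def fp_add(a, b):
        (sa, ea, ma), (sb, eb, mb) = fields(a), fields(b)
        ma |= 1 << 23                    # restore the hidden leading 1
        mb |= 1 << 23
        # Stage 1: order the operands and align the smaller mantissa.
        if (ea, ma) < (eb, mb):
            (sa, ea, ma), (sb, eb, mb) = (sb, eb, mb), (sa, ea, ma)
        mb >>= ea - eb
        # Stage 2: signed mantissa addition (two's complement for negatives).
        m = (-ma if sa else ma) + (-mb if sb else mb)
        sign, m = (1, -m) if m < 0 else (0, m)
        exp = ea
        if m == 0:
            return 0.0
        # Stage 3: renormalize the mantissa and adjust the exponent.
        while m >= 1 << 24:              # overflow: shift right, bump exponent
            m >>= 1
            exp += 1
        while m < 1 << 23:               # leading zeros: shift left
            m <<= 1
            exp -= 1
        bits = (sign << 31) | (exp << 23) | (m & 0x7FFFFF)
        return struct.unpack('>f', struct.pack('>I', bits))[0]

    print(fp_add(1.5, -0.25))            # 1.25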
    For further details on the implementation and on the role of the test benches, see the documentation.

    Usage

    To test the device, create a new project on the Xilinx ISE platform; after adding all the .vhd files to the project (the test benches, whose names end in _TB, do not need to be added), you can create new test benches to verify the correct operation of the device.
    Note that this implementation was designed and tested for a Spartan-6 XC6SLX75T device in the FGG676 package with speed grade -4; other devices may not be supported.

    Group Members

    Visit original content creator repository https://github.com/SigCatta/Sommatore_IEEE754
  • The-loss-optimisation-of-loan-recovery-decision-times-using-forecast-cash-flows

    The loss optimisation of loan recovery decision times using forecast cash flows

    DOI

    A theoretical method is empirically illustrated in finding the best time to forsake a loan such that the overall credit loss is minimised. This is predicated on forecasting the future cash flows of a loan portfolio up to the contractual term, as a remedy to the inherent right-censoring of real-world 'incomplete' portfolios. Two techniques, a simple probabilistic model as well as an eight-state Markov chain, are used to forecast these cash flows independently. We train both techniques on different segments within residential mortgage data, provided by a large South African bank, as part of a comparative experimental framework. As a result, the recovery decision's implied timing is empirically illustrated as a multi-period optimisation problem across uncertain cash flows and competing costs. Using a delinquency measure as a central criterion, our procedure helps to find a loss-optimal threshold at which loan recovery should ideally occur for a given portfolio. Furthermore, both the portfolio's historical risk profile and the forecasting thereof are shown to influence the timing of the recovery decision. This work can therefore facilitate the revision of relevant bank policies or strategies towards optimising the loan collections process, especially that of secured lending.

    Structure

    This R codebase can be run sequentially using the file numbering itself as a structure. Delinquency measures are algorithmically defined in DelinqM.R as data-driven functions, which may be valuable to the practitioner beyond the study's current scope. Note that scripts 3.2 and 3.3 are interactive scripts wherein the so-called Loss-based Recovery Optimisation across Delinquency (or LROD) procedure is repeatedly run by executing the script with different settings, as set out in the comments. Each independent run produces results that are saved for graphing later on (see the graph-variants of each numbered script where relevant).
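    For intuition only, here is a minimal Python sketch of a payments-in-arrears style delinquency measure of the kind DelinqM.R defines as data-driven functions; the function below is an illustrative assumption, not the study's exact formulation.

    import math

    def delinquency(instalment, receipts):
        """Payments-in-arrears style measure: for each month, the number of
        full instalments by which cumulative receipts lag cumulative
        expectations. Illustrative only; see DelinqM.R for the real measures."""
        expected, received, out = 0.0, 0.0, []
        for r in receipts:
            expected += instalment
            received += r
            shortfall = max(0.0, expected - received)
            out.append(math.ceil(shortfall / instalment))
        return out

    # A loan with a 1000/month instalment that skips months 2 and 3, then
    # repays double in month 4: delinquency rises to 2, then falls back to 1.
    print(delinquency(1000.0, [1000, 0, 0, 2000, 1000]))  # [0, 1, 2, 1, 1]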

    Data

    This R codebase assumes that monthly loan performance data are available. Naturally, the data itself cannot be made publicly available due to non-disclosure agreements signed with the particular bank in question. However, the structure and type of data required to reproduce this study are sufficiently described in the commentary within the scripts. This should enable the practitioner to extract and prepare data accordingly.

    Copyright

    All code and scripts are hereby released under the MIT license. Similarly, all graphs produced by the relevant scripts, as well as those published here, are released under a Creative Commons Attribution (CC-BY 4.0) license.

    Visit original content creator repository https://github.com/arnobotha/The-loss-optimisation-of-loan-recovery-decision-times-using-forecast-cash-flows
  • Common-Lisp-Tangram-Solver

    Common-Lisp-Tangram-Solver

    A Tangram Puzzle Solver in Common Lisp that is capable of solving arbitrary geometric tiling problems, utilizing CLIM (Common Lisp Interface Manager) for its GUI.

    tiles

    About

    This was written in Common Lisp in 2003, using CLIM (Common Lisp Interface Manager) for its GUI. It compiles with LispWorks 6.1 and 7.1 with CLIM, on Windows, Linux, and Mac. I can only support Windows and Linux, but it is known to run on an M1 Silicon Mac as well (see below).

    Here is a 2003 poster about the project (in German), and here is my European Lisp Symposium ’21 paper, as well as the slides of the talk.

    poster

    Please check out the YouTube video to see it in action, or try one of the supplied executables to run it on your own machine!

    On my 2012 Core i7 2.4 GHz / 8 GB Ubuntu Xenial PC, thanks to its geometric search heuristics, it takes ~10 seconds to solve the following problem:

    editor1 editor2 thinking solutions

    The solver uses geometric search:

    search
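    To give a flavour of that search (purely illustrative, and not taken from the Lisp code), the Python sketch below applies the same idea on a grid, with polyominoes instead of the solver's arbitrary polygons: always cover the first free cell, try every remaining piece in every orientation, and backtrack on failure.

    def orientations(piece):
        """All distinct rotations and mirrors of a piece given as (x, y) cells."""
        shapes, cells = set(), piece
        for _ in range(4):
            cells = {(y, -x) for x, y in cells}                 # rotate 90 degrees
            for shape in (cells, {(-x, y) for x, y in cells}):  # ...and its mirror
                minx = min(x for x, _ in shape)
                miny = min(y for _, y in shape)
                shapes.add(frozenset((x - minx, y - miny) for x, y in shape))
        return shapes

    def solve(board, pieces, solution=()):
        """board: set of free (x, y) cells; pieces: list of cell sets to place."""
        if not pieces:
            return solution                    # every piece placed: solved
        anchor = min(board)                    # heuristic: fill the first free cell
        for i, piece in enumerate(pieces):     # some piece must cover the anchor
            rest = pieces[:i] + pieces[i + 1:]
            for shape in orientations(piece):
                for ox, oy in shape:           # align each cell of the shape...
                    placed = {(x - ox + anchor[0], y - oy + anchor[1])
                              for x, y in shape}
                    if placed <= board:        # ...onto the anchor, if it fits
                        found = solve(board - placed, rest, solution + (placed,))
                        if found is not None:
                            return found
        return None

    # Tile a 2x4 board with an L-tromino, a straight tromino and a domino.
    board = {(x, y) for x in range(4) for y in range(2)}
    pieces = [{(0, 0), (0, 1), (1, 0)}, {(0, 0), (1, 0), (2, 0)}, {(0, 0), (1, 0)}]
    print(solve(board, pieces))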

    This is work in progress; the heuristics are changed from time to time. Currently, with the above machine configuration, it takes about 3 minutes and 50 seconds to solve the standard Tangram problem if “Show Thinking” is unchecked, and about 6 minutes and 30 seconds with “Show Thinking” enabled:

    Standard Solution 1 Standard Solution 2

    Other problems are much faster. The solver frequently finds innovative solutions. Problems in .tan format are in the src/problems folder.

    Ignat Ignat 2

    History

    The program was written in 2003. I resurrected the code in March 2021 and fixed some major bugs that were responsible for severe incompleteness. It performs much better now!

    The geometric substrate support functions were written by the author during his computer science master's and PhD studies, from 1996 to 2003.

    Installation (Windows, Linux, Mac)

    You will need LispWorks (6.1 or 7.1) with CLIM if you want to run it from source, or to build the application. Otherwise, executables are provided for Linux32 and Windows (see below).

    On Linux, I needed to install a number of (in my case, 32-bit) legacy libraries, such as (32-bit) Motif, to get it going. It is much more straightforward on Windows (tested on 64-bit Windows 10).

    The well-known Lisp hacker Lispm (Rainer Joswig) reported that it runs fine on an Apple Silicon Mac with an M1 processor under 64-bit ARM Linux.

    Executables

    Here are the provided executables for Linux32 and Windows.

    Loading / Usage

    If you want to build it from source, check out src/tangram-sysdcl.lisp. Adjust the logical "tangram" pathname to match your environment. Then, as shown at the end of the file, do a

    (compile-system "tangram")
    (load-system "tangram")
    (tangram::tangram)
    
    

    There are also deliver.lisp scripts provided for Linux32 and Windows.

    For the Mac, you’ll probably have to collect the “FLI Templates”, and then change the tangram-sysdcl.lisp accordingly:

    (define-system tangram
      (:default-pathname "tangram:")
      (:serial "tangram-packages"
               #+:linux
               "tangram-templates"
               tangram-aux tangram-persistence tangram-geometry
               tangram-main tangram-gui))
    

    The process of “FLI template collection” is described in the LispWorks manual.

    Lisp hacker Lispm (Rainer Joswig) has successfully delivered the application on his Apple Silicon Mac with an M1 processor under 64-bit ARM Linux.

    Acknowledgements

    Thanks to Rainer Joswig (Lispm) for checking it out with the new Silicon Macs and LispWorks 7.1 (I only have 6.1), for spreading the word, and for tips regarding the application delivery process (FLI template collection).

    Enjoy!

    Visit original content creator repository https://github.com/lambdamikel/Common-Lisp-Tangram-Solver
  • Canducci.ZipCode

    Canducci ZipCode and AddressCode

    Canducci.Zip

    Version NuGet Github Workflows

    Package Installation

    PM> Install-Package Canducci.Zip
    

    Usage

    CEP lookup

    To look up the data for a given CEP (Brazilian postal code), instantiate the ZipCodeLoad class and pass the number to its Find or FindAsync method; if the response is successful, the data is available in the ZipCodeResult result class, for example:

    IZipCodeLoad zipCodeLoad = new ZipCodeLoad();
    ZipCodeResult result0 = zipCodeLoad.Find("01001000");
    if (result0) // ou result0.IsValid
    {
        ZipCodeItem zipCodeItem = result0; //ou zipCodeItem zipCodeItem = result0.Value;
    }

    The output of the ZipCodeItem class is as follows:

    ZipCodeItem.Zip        // cep
    ZipCodeItem.Address    // logradouro
    ZipCodeItem.District   // bairro
    ZipCodeItem.City       // localidade
    ZipCodeItem.Uf         // uf
    ZipCodeItem.Ibge       // Ibge
    ZipCodeItem.Complement // complemento
    ZipCodeItem.Gia        // gia
    ZipCodeItem.Siafi      // siafi
    ZipCodeItem.Ddd        // ddd
    ZipCodeItem.Unity      // unidade
    ZipCodeItem.State      // estado
    ZipCodeItem.Region     // região
    

    Looking up a list of CEPs

    IAddressCodeLoad addressCodeLoad = new AddressCodeLoad();
    AddressCode addressCode = AddressCode.Parse(ZipCodeUf.SP, "SÃO PAULO", "AVE");
    AddressCodeResult result1 = addressCodeLoad.Find(addressCode);
    if (result1) // ou result1.IsValid
    {
        AddressCodeItem items = result1; // ou  AddressCodeItem items = result1.Value;
    }

    Note that the output of the AddressCodeItem class is an enumeration (List<>) of ZipCodeItem, that is, List<ZipCodeItem>.

    List of federative units (UF)

    Both the AddressCodeLoad and ZipCodeLoad instances provide an extension method that returns an IDictionary<string, string>, which you can use in your screens to standardize this information, for example:

    IZipCodeLoad zipCodeLoad = new ZipCodeLoad();
    IAddressCodeLoad addressCodeLoad = new AddressCodeLoad();
    
    System.Collections.Generic.IDictionary<string, string> items0_List = zipCodeLoad.UFToList(); 
    System.Collections.Generic.IDictionary<string, string> items1_List = addressCodeLoad.UFToList();
    

    These two variables (items0_List and items1_List) hold the same contents and are generated from the ZipCodeUf enum, to make it easier to build your user interfaces.

    MVC Core Application

    The example TestController controller has two methods:

    [HttpPost]
    public async Task<IActionResult> Index([FromBody] ZipCodeData data)
    {
     if (ModelState.IsValid)
     {
    	if (ZipCode.TryParse(data.Value, out ZipCode zipCode))
    	{
    	   ZipCodeResult zipCodeResult = await ZipCodeLoad.FindAsync(zipCode);
    	   return Json(zipCodeResult);
    	}
     }
     return Json(new { IsValid = false });
    }
    
    [HttpPost]
    public async Task<IActionResult> Address([FromBody] AddressCodeData data)
    {
     if (AddressCode.TryParse(ParseZipCodeUf(data.Uf), data.City, data.Address, out AddressCode addressCode))
     {
    	AddressCodeResult addressCodeResult = await AddressCodeLoad.FindAsync(addressCode);
    	return Json(addressCodeResult);
     }
     return Json(new { IsValid = false });
    }
    
    private ZipCodeUf ParseZipCodeUf(string uf) => Enum.Parse<ZipCodeUf>(uf);

    together with the code that returns JSON to the front end, showing how to use the Canducci.Zip package; the complete example project is at this link.

    Visit original content creator repository https://github.com/fulviocanducci/Canducci.ZipCode
  • webpack-bundle-analyzer

    npm node tests downloads

    Webpack Bundle Analyzer

    Visualize size of webpack output files with an interactive zoomable treemap.

    Install

    # NPM
    npm install --save-dev webpack-bundle-analyzer
    # Yarn
    yarn add -D webpack-bundle-analyzer

    Usage (as a plugin)

    const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
    
    module.exports = {
      plugins: [
        new BundleAnalyzerPlugin()
      ]
    }

    It will create an interactive treemap visualization of the contents of all your bundles.

    webpack bundle analyzer zoomable treemap

    This module will help you:

    1. Realize what’s really inside your bundle
    2. Find out what modules make up the most of its size
    3. Find modules that got there by mistake
    4. Optimize it!

    And the best thing is that it supports minified bundles! It parses them to get the real size of bundled modules, and it also shows their gzipped or Brotli sizes!

    Options (for plugin)

    new BundleAnalyzerPlugin(options?: object)
    Name Type Description
    analyzerMode One of: server, static, json, disabled Default: server. In server mode analyzer will start HTTP server to show bundle report. In static mode single HTML file with bundle report will be generated. In json mode single JSON file with bundle report will be generated. In disabled mode you can use this plugin to just generate Webpack Stats JSON file by setting generateStatsFile to true.
    analyzerHost {String} Default: 127.0.0.1. Host that will be used in server mode to start HTTP server.
    analyzerPort {Number} or auto Default: 8888. Port that will be used in server mode to start HTTP server. If analyzerPort is auto, the operating system will assign an arbitrary unused port.
    analyzerUrl {Function} called with { listenHost: string, listenPort: string, boundAddress: server.address }. server.address comes from Node.js. Default: http://${listenHost}:${boundAddress.port}. The URL printed to the console in server mode.
    reportFilename {String} Default: report.html. Path to bundle report file that will be generated in static mode. It can be either an absolute path or a path relative to a bundle output directory (which is output.path in webpack config).
    reportTitle {String|function} Default: function that returns pretty printed current date and time. Content of the HTML title element; or a function of the form () => string that provides the content.
    defaultSizes One of: stat, parsed, gzip, brotli Default: parsed. Module sizes to show in report by default. Size definitions section describes what these values mean.
    compressionAlgorithm One of: gzip, brotli Default: gzip. Compression type used to calculate the compressed module sizes.
    openAnalyzer {Boolean} Default: true. Automatically open report in default browser.
    generateStatsFile {Boolean} Default: false. If true, webpack stats JSON file will be generated in bundle output directory
    statsFilename {String} Default: stats.json. Name of webpack stats JSON file that will be generated if generateStatsFile is true. It can be either an absolute path or a path relative to a bundle output directory (which is output.path in webpack config).
    statsOptions null or {Object} Default: null. Options for stats.toJson() method. For example you can exclude sources of your modules from stats file with source: false option. See more options here.
    excludeAssets {null|pattern|pattern[]} where pattern equals to {String|RegExp|function} Default: null. Patterns that will be used to match against asset names to exclude them from the report. If pattern is a string it will be converted to RegExp via new RegExp(str). If pattern is a function it should have the following signature (assetName: string) => boolean and should return true to exclude matching asset. If multiple patterns are provided asset should match at least one of them to be excluded.
    logLevel One of: info, warn, error, silent Default: info. Used to control how much details the plugin outputs.

    Usage (as a CLI utility)

    You can analyze an existing bundle if you have a webpack stats JSON file.

    You can generate it using BundleAnalyzerPlugin with generateStatsFile option set to true or with this simple command:

    webpack --profile --json > stats.json

    If you’re on Windows and using PowerShell, you can generate the stats file with this command to avoid BOM issues:

    webpack --profile --json | Out-file 'stats.json' -Encoding OEM
    

    Then you can run the CLI tool.

    webpack-bundle-analyzer bundle/output/path/stats.json
    

    Options (for CLI)

    webpack-bundle-analyzer <bundleStatsFile> [bundleDir] [options]

    Arguments are documented below:

    bundleStatsFile

    Path to webpack stats JSON file

    bundleDir

    Directory containing all generated bundles.

    options

      -V, --version                   output the version number
      -m, --mode <mode>               Analyzer mode. Should be `server`, `static` or `json`.
                                      In `server` mode analyzer will start HTTP server to show bundle report.
                                      In `static` mode single HTML file with bundle report will be generated.
                                      In `json` mode single JSON file with bundle report will be generated. (default: server)
      -h, --host <host>               Host that will be used in `server` mode to start HTTP server. (default: 127.0.0.1)
      -p, --port <n>                  Port that will be used in `server` mode to start HTTP server. Should be a number or `auto` (default: 8888)
      -r, --report <file>             Path to bundle report file that will be generated in `static` mode. (default: report.html)
      -t, --title <title>             String to use in title element of html report. (default: pretty printed current date)
      -s, --default-sizes <type>      Module sizes to show in treemap by default.
                                      Possible values: stat, parsed, gzip, brotli (default: parsed)
      --compression-algorithm <type>  Compression algorithm that will be used to calculate the compressed module sizes.
                                      Possible values: gzip, brotli (default: gzip)
      -O, --no-open                   Don't open report in default browser automatically.
      -e, --exclude <regexp>          Assets that should be excluded from the report.
                                      Can be specified multiple times.
      -l, --log-level <level>         Log level.
                                      Possible values: debug, info, warn, error, silent (default: info)
      -h, --help                      output usage information
    

    Size definitions

    webpack-bundle-analyzer reports three values for sizes. defaultSizes can be used to control which of these is shown by default. The different reported sizes are:

    stat

    This is the “input” size of your files, before any transformations like minification.

    It is called “stat size” because it’s obtained from Webpack’s stats object.

    parsed

    This is the “output” size of your files. If you’re using a Webpack plugin such as Uglify, then this value will reflect the minified size of your code.

    gzip

    This is the size of running the parsed bundles/modules through gzip compression.

    brotli

    This is the size of running the parsed bundles/modules through Brotli compression.
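    As an illustration of the last two definitions (not part of the analyzer itself), the following Python snippet compresses a built bundle both ways; the bundle path is a placeholder, and the third-party brotli package is assumed to be installed (pip install brotli).

    import gzip

    import brotli  # third-party package, assumed available

    with open('dist/bundle.js', 'rb') as f:  # placeholder path to a built bundle
        data = f.read()

    print('parsed size:', len(data))                               # bytes on disk
    print('gzip size:  ', len(gzip.compress(data, 9)))             # max gzip level
    print('brotli size:', len(brotli.compress(data, quality=11)))  # max brotli level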

    Selecting Which Chunks to Display

    When opened, the report displays all of the Webpack chunks for your project. It’s possible to filter to a more specific list of chunks by using the sidebar or the chunk context menu.

    Sidebar

    The Sidebar Menu can be opened by clicking the > button at the top left of the report. You can select or deselect chunks to display under the “Show chunks” heading there.

    Chunk Context Menu

    The Chunk Context Menu can be opened by right-clicking or Ctrl-clicking on a specific chunk in the report. It provides the following options:

    • Hide chunk: Hides the selected chunk
    • Hide all other chunks: Hides all chunks besides the selected one
    • Show all chunks: Un-hides any hidden chunks, returning the report to its initial, unfiltered view

    Troubleshooting

    I don’t see gzip or parsed sizes, it only shows stat size

    This happens when webpack-bundle-analyzer analyzes files that don’t actually exist in your file system, for example when you work with webpack-dev-server, which keeps all files in RAM. If you use webpack-bundle-analyzer as a plugin you won’t get any errors; however, if you run it via CLI, you get this error message in the terminal:

    Error parsing bundle asset "your_bundle_name.bundle.js": no such file
    No bundles were parsed. Analyzer will show only original module sizes from stats file.
    

    To get more information about it you can read issue #147.

    Other tools

    • Statoscope – Webpack bundle analyzing tool to find out why a certain module was bundled (and more features, including interactive treemap)

    Maintainers


    Yuriy Grunin

    Vesa Laakso

    Contributing

    Check out CONTRIBUTING.md for instructions on contributing 🎉

    Visit original content creator repository https://github.com/webpack-contrib/webpack-bundle-analyzer
  • transpar-liam-ent

    transpar[liam]ent

    This is a WIP; ideally, it will provide a tool to properly scrape the contents of the “Parlamentarische Anfragen” website of the Austrian parliament (see here).

    The reason behind it is that the website itself is relatively annoying to use and provides very poor search functionality, i.e. no full-text search and no direct search by inquirer or recipient.

    Currently, this is merely a collection of very rudimentary functions.
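    To indicate the intended direction, here is a hedged sketch using the stated dependencies; the listing URL and the '/J_' link heuristic are assumptions about the site's layout, not verified against the live pages.

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical entry point; the real inquiry-list URL and markup may differ.
    LIST_URL = "https://www.parlament.gv.at/PAKT/VHG/XXVII/J/"

    def fetch_inquiry_links(url=LIST_URL):
        """Fetch a listing page and return (title, absolute URL) pairs for all
        links that appear to point at inquiry detail pages ('/J_' in the path)."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "lxml")
        links = []
        for a in soup.find_all("a", href=True):
            if "/J_" in a["href"]:
                href = requests.compat.urljoin(url, a["href"])
                links.append((a.get_text(strip=True), href))
        return links

    if __name__ == "__main__":
        for title, href in fetch_inquiry_links():
            print(title, href)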

    Run the PoC

    After installing the dependencies (BeautifulSoup4, requests, lxml), you can run the PoC with:

    $ ./PoC.py
    date: 25.10.2021
    inquiry: {'desc': 'Schriftliche Anfrage der Abgeordneten Douglas   Hoyos-Trauttmansdorff, Kolleginnen und Kollegen an die   Bundesministerin für Landesverteidigung betreffend    Zukunft der Hackher-Kaserne', 'from': 'Eingebracht von:    Douglas Hoyos-Trauttmansdorff', 'to': 'Eingebracht an:    Mag. Klaudia Tanner Regierungsmitglied     Bundesministerium für Landesverteidigung', 'refs': [{'url': 'https://www.parlament.gv.at/PAKT/VHG/XXVII/J/J_08414/imfname_1007132.pdf', 'ref': 'Anfrage (gescanntes Original) / PDF, 648 KB'}, {'url': 'https://www.parlament.gv.at/PAKT/VHG/XXVII/J/J_08414/fname_1007151.pdf', 'ref': 'Anfrage (elektr. übermittelte Version) / PDF, 85 KB'}, {'url': 'https://www.parlament.gv.at/PAKT/VHG/XXVII/J/J_08414/fnameorig_1007151.html', 'ref': 'HTML, 85 KB'}]}
    number: 8414/J
    progress: has not been answered
    relative url: /PAKT/VHG/XXVII/J/J_08414/index.shtml
    title: Zukunft der Hackher-Kaserne (BMLV) - Frist für die Beantwortung 25.12.2021
    url: https://www.parlament.gv.at/PAKT/VHG/XXVII/J/J_08414/index.shtml
    

    Visit original content creator repository
    https://github.com/krauthex/transpar-liam-ent

  • project-job-insights

    Visit original content creator repository
    https://github.com/ChrisMoura2000/project-job-insights

  • Bolao_copa

    World Cup Betting Pool (Bolão) System in PHP

    This is a simple World Cup betting pool (bolão) system implemented in PHP. The system lets users register, submit predictions for World Cup matches, and track their scores throughout the tournament.

    Requirements

    Make sure your development environment meets the following requirements:

    • PHP 7.0 or higher
    • MySQL or another PHP-compatible database management system
    • A web server (e.g., Apache)

    Setup

    1. Clone the repository:

      https://github.com/Igorpereirag/Bolao_copa.git
    2. Set up the database:

      • Import the database.sql file into your database management system to create the required structure.
      • Edit the config.php file with your database credentials.
    3. Start the web server:

      php -S localhost:8000
    4. Open the system in your browser:

      http://localhost:8000

    Features

    • User Registration: users can sign up for the system.
    • World Cup Predictions: users can submit predictions for World Cup matches.
    • Scoring: the system automatically computes each user's score from the match results (a sketch of a typical scoring rule follows this list).
    • Standings: displays the ranking of the pool's participants.
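    The repository does not spell out the scoring rule, so the sketch below (in Python, for illustration only) shows one common bolão convention: 3 points for the exact score, 1 point for predicting the right outcome. The rule and the function are assumptions, not taken from this codebase.

    def points(guess, result):
        """Score one match guess against the final result: 3 points for the
        exact score, 1 for the right outcome (assumed convention)."""
        if guess == result:
            return 3
        sign = lambda home, away: (home > away) - (home < away)
        return 1 if sign(*guess) == sign(*result) else 0

    # Guessed 2-1; actual results 2-1 (exact), 3-1 (right winner), 1-1 (wrong).
    print([points((2, 1), r) for r in [(2, 1), (3, 1), (1, 1)]])  # [3, 1, 0]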

    Contributing

    Contributions are welcome! If you find bugs or have suggestions for improvements, feel free to open an issue or submit a pull request.

    Visit original content creator repository
    https://github.com/igorpereirag/Bolao_copa

  • Android-WordToSpan

    Android-WordToSpan

    Android Arsenal Release

    A small and simple library that turns URLs, emails, mentions, and tags into clickable links, and highlights text, in a TextView.

    Gradle

    allprojects {
       repositories {
          ...
          maven { url 'https://jitpack.io' }
       }
    }
    
    dependencies {
        ...
        implementation 'com.github.bachors:Android-WordToSpan:3.1'
    }
    

    Create Link

    gif

    String myText = "I know http://just.com/anu how to @whisper, And I #know just #how to cry,I know just @where to anu@find.com the answers";
    TextView tv = (TextView)findViewById(R.id.tx);
    
    WordToSpan link = new WordToSpan();
    link.setColorTAG(Color.GREEN)
    	.setColorURL(Color.MAGENTA)
    	.setColorPHONE(Color.RED)
    	.setColorMAIL(getResources().getColor(R.color.colorPrimary))
    	.setColorMENTION(getResources().getColor(R.color.colorAccent))
    	.setUnderlineURL(true)
    	.setLink(myText)
    	.into(tv)
    	.setClickListener(new WordToSpan.ClickListener() {
    		@Override
    		public void onClick(String type, String text) {
    			// type: "tag", "mail", "url", "phone", "mention" or "custom"
    			Toast.makeText(getApplication(), "Type: " + type + "\nText: " + text, Toast.LENGTH_LONG).show();
    		}
    	});

    Create Highlight

    gif

    String myText = "Any code and resources in the Android library anywhere love code.";
    String keyWord = "any code";
    TextView tv = (TextView)findViewById(R.id.tx);
    
    WordToSpan highlight = new WordToSpan();
    highlight.setBackgroundHIGHLIGHT(Color.YELLOW)
    	.setColorHIGHLIGHT(Color.RED)
    	.setHighlight(myText, keyWord)
    	.into(tv);

    MIT

    Visit original content creator repository https://github.com/bachors/Android-WordToSpan